Hello! I am a program lead at Partnership on AI (PAI), a non-profit bringing together academic, civil society, industry, and media organizations to advance responsible AI. Work with me and my colleagues to inform and shape a new project on AI safety at PAI!

Apply: Safety-Critical AI (SCAI) Senior Advisor

Applications are due on December 10, 2021, but this is a rolling deadline, so we will consider applications until the role is filled. This is a remote, contract, consulting position beginning in January 2022.

We are seeking a senior advisor for a forthcoming workstream on norms for the responsible deployment of AI products to:

1. Inform the research and program agenda for the project, and

2. Create a roadmap outlining research directions and practical outputs.

What the new project is about

The goal of the new workstream on deployment norms is to develop recommendations and resources to improve industry coordination on AI safety, i.e., reducing the risks posed by the deployment of powerful AI. This workstream will consider questions around how to enable access to models, encourage incident sharing, and facilitate a better understanding of risks, all in order to enhance safety efforts. You will have the *unique* opportunity to shape a multistakeholder workstream, to develop best practices alongside individuals in civil society, academia, and industry from PAI's Partner network, and to inform key decision makers in government and industry.

Who we are looking for

The position is designed for well-established researchers or practitioners who have an extensive track record of leading research, policy, or strategy work in AI safety, and who have a strong interest in developing governance strategies to prevent a “race to the bottom” on safety. We are NOT being prescriptive about years of experience or educational background. We welcome applications from candidates from different disciplinary backgrounds, including engineering, statistics, political science, and law.

What we have been up to on AI safety

PAI develops tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. Some of the other ongoing projects in our Safety-Critical AI Program include:

  • Publication Norms for Responsible AI - a project examining how novel research can be published in a way that maximizes benefits while mitigating potential harms
  • AI Incidents Database - a central repository of over 1300 real-world incident reports aimed at improving safety efforts by increasing awareness.

Questions?

Check out the job description for much more information on the scope of work. For more on Partnership on AI, check out our website and the list of other projects we have been working on. If you are interested in applying and have any questions, leave a comment or contact me at madhu@partnershiponai.org.
