Looking for AI Safety Experts to Provide High Level Guidance for RAISE

by ofer · 6th May 2018 · 1 min read · 5 comments


The Road to AI Safety Excellence (RAISE) initiative aims to help aspiring AI safety researchers and interested students become familiar with the research landscape efficiently, thereby hopefully increasing the number of researchers who contribute to the field. To that end, we (the RAISE team) are building a high-quality online course. You can see our pilot lesson here (under “Corrigibility 1”).

Most of the course segments will be based on distilled summaries of one or more papers. We have already distilled ~9 papers on corrigibility for the first course segments and used the distilled summaries to write video script drafts.

Our long-term goal is to cover as much of the AI safety research landscape as possible, in the most useful way possible. To do so, we need guidance from experts with extensive familiarity with the literature in one of the broad subfields of AI safety (e.g. the machine learning perspective or the Agent Foundations research agenda, or broad parts thereof). We realize that the time of such experts is a critically scarce resource, so we will ask them only for high-level guidance, including:

1) Their idea of a good structure for a part of the course: a list of sections, and the subsections that might constitute each one.

2) Pointers to papers to base each subsection on.

If an expert expects that contributing further to RAISE would be an effective use of their time, they could also choose to review our lesson scripts and provide feedback before the videos are recorded.

If you think this role would be an effective use of your time, please contact us at raise@aisafety.camp