Modeling AGI Safety Frameworks with Causal Influence Diagrams

by xrchz · 21st Jun 2019

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a linkpost for https://arxiv.org/abs/1906.08663

We have written a paper that represents various frameworks for designing safe AGI (e.g., RL with reward modeling, CIRL, and debate) as causal influence diagrams (CIDs), to help us compare the frameworks and better understand the corresponding agent incentives.

We would love to get comments, especially on the following questions:

  • Are the depicted frameworks represented accurately?
  • Is the CID representation helpful?
  • Are there frameworks we did not include that would be useful to model this way?

The paper's abstract:

Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other. In this paper, we model and compare the most promising AGI safety frameworks using causal influence diagrams. The diagrams show the optimization objective and causal assumptions of the framework. The unified representation permits easy comparison of frameworks and their assumptions. We hope that the diagrams will serve as an accessible and visual introduction to the main AGI safety frameworks.
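To make the idea concrete, here is a minimal sketch (not from the paper) of how a simple CID might be encoded as an annotated directed graph. It uses networkx rather than any CID-specific library, and the node names (S, A, R) and the `node_type` attribute are illustrative assumptions, depicting a one-step RL interaction rather than any of the paper's diagrams.

```python
import networkx as nx

# A toy CID for a one-step RL interaction: the agent observes state S,
# chooses action A, and receives reward R. Nodes are annotated with the
# three CID node types: chance, decision, and utility.
cid = nx.DiGraph()
cid.add_node("S", node_type="chance")    # environment state
cid.add_node("A", node_type="decision")  # agent's action
cid.add_node("R", node_type="utility")   # reward the agent optimizes

# Edges encode causal influence; S -> A also serves as an information
# link (the agent sees the state before acting).
cid.add_edges_from([("S", "A"), ("S", "R"), ("A", "R")])

# The optimization objective is the set of utility nodes; the causal
# assumptions are the edges of the graph.
utilities = [n for n, d in cid.nodes(data=True) if d["node_type"] == "utility"]
print("Agent optimizes:", utilities)           # -> ['R']
print("Causal assumptions:", list(cid.edges))  # -> [('S', 'A'), ('S', 'R'), ('A', 'R')]
```

In this representation, comparing two frameworks amounts to comparing their graphs: which nodes are decisions, which are utilities, and which causal paths connect them.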