Related: AI policy ideas: Reading list.
This document collects ideas for AI labs, mostly from an x-risk perspective. Its organization black-boxes technical AI work, including technical AI safety.
Maybe I should make a separate post on desiderata for labs (for existential safety).
See generally The Role of Cooperation in Responsible AI Development (Askell et al. 2019).
Transparency enables coordination (and some regulation).
Labs should minimize/delay the diffusion of their capabilities research.
Some sources are roughly sorted within sections by a combination of x-risk relevance, quality, and influence, but sometimes I didn't try to sort them, and I haven't read all of them.
Please have a low bar for suggesting additions, substitutions, rearrangements, etc.
Current as of: 9 July 2023.
At various levels of abstraction, coordination can look like:
- Avoiding a race to the bottom
- Internalizing some externalities
- Sharing some benefits and risks
- Differentially advancing more prosocial actors?
- More?
Policymaking in the Pause (FLI 2023) cites A Systematic Review on Model Watermarking for Neural Networks (Boenisch 2021); I don't know if that source is good. (Note: this disclaimer does not imply that I know that the other sources in this doc are good!)
I am not excited about watermarking. (Note: this disclaimer does not imply that I am excited about the other ideas in this doc! But I am excited about most of them.)