Thanks for the nudge: We'll consider producing an HTML version!
Junior researchers often wonder what they should work on. To potentially help, we asked people at the Centre for the Governance of AI for research ideas related to longtermist AI governance. The compiled ideas are developed to varying degrees, including not just questions but also some concrete research approaches, arguments, and thoughts on why the questions matter. They differ in scope: while some could be explored over a few months, others could be a productive use of a PhD or several years of research.
We do not make strong claims about these questions, e.g. that they are the absolute top priority at current margins. Each idea represents only the views of the person who wrote it. The ideas aren't necessarily original. Where we think someone is already...
Thanks for the post and the critiques. I won't respond at length, other than to say two things: (i) it seems right to me that we'll need something like licensing or pre-approval of deployments, and ideally also of decisions to train particularly risky models. Such a regime would be undergirded by various compute governance efforts to identify and punish non-compliance. This could, e.g., involve cloud providers needing to check whether a customer buying more than X compute has the relevant license, or to confirm that they are not using the compute to train ...