For instance, is there a maintained list of attempts at global agreement, whether public or private?

One unenforceable but widely endorsed example is the Future of Life Institute's open letter on AI, which has now attracted roughly 8,000 signatures from AI safety researchers, AGI researchers, and other AI-adjacent technologists. It's not immediately clear what fraction of AI-safety-concerned individuals this represents, but at a cursory glance it appears to be the largest consensus to date. The letter is merely an agreement to address AI safety sooner rather than later, so I am interested to hear of any agreements that address AI safety policy itself, even if the agreement is considered largely unsuccessful.

Please feel free to answer with personal views on global coordination.
