Request to AGI organizations: Share your views on pausing AI progress
A few observations from the last few weeks:

* On March 22, FLI published an open letter calling for a six-month moratorium on frontier AI progress.
* On March 29, Eliezer Yudkowsky published a piece in TIME calling for an indefinite moratorium.
* To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces.

We offer a request to AGI organizations: Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position. We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic.

Discourse: We believe labs are well-positioned to contribute to the dialogue around whether (or how) to slow AI progress, making it more likely for society to reach true and useful positions.

Coordination: Statements from labs could make coordination more likely. For example, lab A could say "we would support a pause under X conditions with Y implementation details". Alternatively, lab B could say "we would be willing to pause if lab C agreed to Z conditions."

Transparency: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics.

We would be especially excited about statements written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums.

Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this explicit (e.g., by stating that they have considered the requests but will not be sharing their views at this time).
Yes, that's one value. RSPs, and many of the policy debates around them, would have been less messed up if there had been conceptual clarity from the start (i.e., a confused notion was turned into the standard, which was then impossible to fix in policy discussions, leaving the Code of Practice flawed). I don't know of a specific example of preventing equivocation in other industries (it seems hard to know of such examples), but the fact that basically all industries use the same set of concepts is evidence that they're pretty general-purpose and repurposable.
Another is just that it helps one think about the issues in a more generalized way.
For instance, once you see evaluations as a Key Risk Indicator...