The Compendium, A full argument about extinction risk from AGI
We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi) have just published The Compendium, which brings together in a single place the most important arguments that drive our models of the AGI race, and what we need to do to avoid catastrophe.

We felt that something like this has been missing from the AI conversation. Most of these points have been shared before, but a “comprehensive worldview” doc has been missing. We’ve tried our best to fill this gap, and we welcome feedback and debate about the arguments. The Compendium is a living document, and we’ll keep updating it as we learn more and change our minds.

We would appreciate your feedback, whether or not you agree with us:

* If you do agree with us, please point out where you think the arguments can be made stronger, and contact us if there are ways you’d be interested in collaborating in the future.
* If you disagree with us, please let us know where our argument loses you and which points are the most significant cruxes - we welcome debate.

Here is the twitter thread and the summary:

> The Compendium aims to present a coherent worldview about the extinction risks of artificial general intelligence (AGI), an artificial intelligence whose capabilities exceed those of humans, in a way that is accessible to non-technical readers who have no prior knowledge of AI. A reader should come away with an understanding of the current landscape, the race to AGI, and its existential stakes.
>
> AI progress is rapidly converging on building AGI, driven by a brute-force paradigm that is bottlenecked by resources, not insights. Well-resourced, ideologically motivated individuals are driving a corporate race to AGI. They are now backed by Big Tech, and will soon have the support of nations.
>
> People debate whether or not it is possible to build AGI, but most of the discourse is rooted in pseudoscience. Because humanity lacks a formal theory of intelligence, we must operate by the empirical observation that AI c...
Might well be the case. When I do that, I find that I have a lot of fragments, bits and pieces, and that they don't make a coherent whole. But it might just be me.