Artificial General Intelligence (AGI) poses an extinction risk to all known biological life. Given the stakes involved -- the whole world -- we should be looking at 10% chance-of-AGI-by timelines as the deadline for catastrophe prevention (a global treaty banning superintelligent AI), rather than 50% (median) chance-of-AGI-by timelines, which seem...
Are you passionate about pushing for a global halt to AGI development? An international treaty banning superintelligent AI? Pausing AI? Before it’s too late to prevent human extinction? Would you like to live with a group of like-minded people pushing for the same? Do you want to do much more,...
Ilya thought back again to when he’d overheard that conversation between two of his junior colleagues. They were discussing an “AI ending the world” story. This was early in the year, although it felt like years ago now given how fast things had been moving. The year, 2025, was now...
[Added 13Jun: Submitted to the OpenPhil AI Worldviews Contest - this PDF version is the most up to date] Content note: discussion of a near-term, potentially hopeless[1] life-and-death situation that affects everyone. TL;DR: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a...