Toy models show that we're wearing alive-tinted glasses. In discussions of existential risk or potential apocalypses, a common refrain is something along the lines of "We've been fine before, so we'll be fine again." "Sure," some argue, "we've had some close calls in the past, but we've always been fine...
Consider everything in this post speculative. I intend to provide updates once I have data from more models, more robust Starburst performance data (especially for older Claude models), and generally higher confidence. This is somewhat less polished than I'd like, so that I can publish it before GPT-5 releases or demos...
There are so many examples of insanely demanding AGI definitions[1] or criteria, typically involving, among other things, the ability to do something that only human geniuses can do. Usually, these criteria stem from a requirement that AGI be able to do anything any human can do. In extreme cases, people...
An AI Timeline with Perils Short of ASI
By Chapin Lenthall-Cleary, Cole Gaboriault, and Alicia Lopez
We wrote this for AI 2027's call for alternate timelines of the development and impact of AI over the next few years. This was originally published on The Pennsylvania Heretic on June 1st, 2025....
By Chapin Lenthall-Cleary and Cole Gaboriault
As LLMs and other forms of AI have become more capable, interest has steadily grown in determining how “smart” they really are. Discussion tends to circle, often obliquely, around the following cluster of questions: are the models as smart as people? Which people? How...