When I started my freshman year, my median estimate for AGI was 20 years. By my senior year it was down to 3 years (although it has gone back up to 5 years since then). My expectations of the future made my college experience somewhat unusual, and I will share some...
AI 2027, Situational Awareness, and basically every scenario that tries to seriously wrestle with AGI assume that the US and China are basically the only countries that matter in shaping the future of humanity. I think this assumption is mostly valid. But if other countries wake up to AGI, how...
A few people have predicted that there is inherent superexponentiality in time horizons. One way to define inherently superexponential time horizons is: > Even without substantial AI R&D automation, there is some reason to expect time horizons to grow faster than exponentially at some level of capabilities. There are...
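To make the distinction concrete, here is a minimal illustrative sketch contrasting the two growth patterns. All parameters (starting horizon, a 7-month doubling time, a `shrink` factor for how much faster each successive doubling arrives) are my own toy assumptions, not anyone's actual forecast:

```python
# Toy model: exponential vs. superexponential time-horizon growth.
# All parameters are illustrative, not empirical estimates.

def exponential_horizon(t_months, h0=1.0, doubling_months=7.0):
    """Time horizon (hours) after t_months with a fixed doubling time."""
    return h0 * 2 ** (t_months / doubling_months)

def superexponential_horizon(t_months, h0=1.0, doubling_months=7.0, shrink=0.9):
    """Time horizon where each successive doubling takes `shrink` times
    as long as the previous one (doubling time itself shrinks)."""
    horizon, elapsed, period = h0, 0.0, doubling_months
    while elapsed + period <= t_months:
        horizon *= 2          # complete one full doubling
        elapsed += period
        period *= shrink      # the next doubling arrives sooner
    # partial progress through the current (shorter) doubling period
    horizon *= 2 ** ((t_months - elapsed) / period)
    return horizon
```

Under any `shrink` below 1, the superexponential curve eventually pulls away from the plain exponential, which is the whole content of the claim.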
How large of a breakthrough is necessary for dangerous AI? In order to cause a catastrophe, an AI system would need to be very competent at agentic tasks[1]. The best metric of general agentic capabilities is METR’s time horizon. The time horizon measures the length of well-specified software tasks AI...
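Since the time-horizon metric comes up repeatedly on this blog, here is a hedged sketch of the idea behind a "50% time horizon": given per-task success data, find the task length at which success probability crosses 50%. The data and the simple log-space interpolation are illustrative; METR's actual methodology fits a logistic curve rather than interpolating:

```python
import math

# Illustrative sketch only: locate the task length where an agent's
# success rate crosses 50%, interpolating in log-length space.
def fifty_percent_horizon(tasks):
    """tasks: list of (length_minutes, success_rate), sorted by length.
    Returns the interpolated 50%-success task length, or None if the
    success rate never crosses 50% within the data."""
    for (l1, s1), (l2, s2) in zip(tasks, tasks[1:]):
        if s1 >= 0.5 >= s2:
            frac = (s1 - 0.5) / (s1 - s2)
            return math.exp(math.log(l1) + frac * (math.log(l2) - math.log(l1)))
    return None
```

The log-space interpolation matters because task lengths span orders of magnitude, so linear interpolation in raw minutes would badly distort the crossing point.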
Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area. I. AGI is likely in the...
In my previous post, I made the case that surviving until AGI seems very worthwhile, and that people should consider taking actions to make that more likely. This post goes into what the lowest-hanging fruit is for surviving until AGI. I'll assume that AGI is less than 20 years...
(written for a Twitter audience) Has AI progress slowed down? I’ll write some personal takes and predictions in this post. The main metric I look at is METR’s time horizon, which measures the length of tasks agents can perform. It has been doubling for more than 6 years now, and...
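A steady doubling trend is easy to extrapolate, which is why the metric is so useful for forecasting. A minimal sketch, assuming an exponential trend with a constant doubling time (the ~7-month default matches the figure METR has reported; the inputs are otherwise placeholders):

```python
import math

# Extrapolate an exponential time-horizon trend.
# doubling_months=7 follows METR's reported figure; other inputs are
# placeholders, and the whole exercise assumes the trend simply continues.
def months_until_horizon(target_hours, current_hours, doubling_months=7.0):
    """Months until the time horizon reaches target_hours,
    assuming steady exponential doubling."""
    return doubling_months * math.log2(target_hours / current_hours)
```

For instance, going from a 1-hour to an 8-hour horizon is three doublings, so under a 7-month doubling time the model gives 21 months. The fragility, of course, is the assumption that the trend holds.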