We are in a strange situation: the big labs take AI risk more seriously than the government, and are doing a better job of preparing for it. There has been surprising progress on alignment over the last few years, and some strong work figuring out how to shape model behavior....
Nobody said the path would be clear. We know we need to prepare for AGI, but how do we do that if we don’t know whether it’s coming in 3 years or in 100? What about recursive self-improvement: will that escalate to superintelligence, or fizzle out? And as the...
There’s broad (though not universal) agreement that present-day AI is probably not conscious, but very little agreement about whether consciousness is likely to emerge as we move toward AGI. This isn’t an abstract question: AI consciousness has major implications for alignment. Further, a conscious AI might have moral rights...
I’m pleased to report that I have no new AI-related crises for you this week. Instead we get to focus on the fun parts, starting with physical constraints on AI development. Dylan Patel explains how power, GPUs, and memory will each be crucial bottlenecks on AI development over the next...
The conflict between the Department of War and Anthropic has quieted somewhat, but nothing has been resolved and a catastrophic outcome is still entirely possible. Regardless of what happens next, two things are very clear. This is the least political that AI will ever be. Politicians are finally waking up...
Last week’s conflict between the Department of War and Anthropic marked a turning point for AI. I’m cautiously hopeful that the parties involved will find some kind of deescalation from the current nuclear option, but irreparable damage has already been done: to Anthropic, to the entire AI industry, and to...
I’m on vacation, so this week’s newsletter is a bit lighter than usual. I wish I could say that the torrent of AI news was also lighter, but… yeah, not so much. Our focus this week is on politics and strategy. We have two pieces on populist anger about AI,...