I recently gave a talk at EA Summit Vancouver '25 exploring dual catastrophic risks we face from advanced AI.
Intended audience: This was a foundation-level talk meant to give newcomers a solid overview of AI risk, though I hope those with more background might still find the framing or specific arguments valuable.
Recorded talk link (25 minutes): https://youtu.be/x53V2VCpz8Q?si=yVCRtCIb9lXZnWnb&t=59
The core question: How do we thread the needle between AI that escapes our control (alignment failure) and AI that concentrates unprecedented power in the hands of a few (successful alignment to narrow interests)?
The talk examines three possible AI futures: not an exhaustive set, but three particularly plausible and important scenarios I wanted the audience to consider.
Much of the AI safety discourse has focused on the alignment problem—ensuring AI systems do what we intend. While this talk covers that foundational challenge, I also emphasize that solving narrow alignment (AI doing what its operators want) without addressing broader concerns could lead to extreme power concentration. This isn't a novel insight—many have written about and are working on power concentration risks as well—but I think the discourse has somewhat over-indexed on misalignment relative to the power concentration risks that successful alignment could enable.
The goal is to help people understand both dimensions of the problem while motivating action rather than despair.
I use the analogy of an 8-year-old CEO trying to hire adults to run a trillion-dollar company (borrowed from Ajeya Cotra's post on Cold Takes; I really like this analogy) to illustrate the alignment problem, and explore why "just pull the plug" isn't a viable solution once we become dependent on AI systems.
The talk also covers current progress on dual-purpose solutions (those that help with both risks) versus targeted interventions, including work on interpretability, compute governance, and international coordination. Given the tight timeline for preparing this talk, the solutions section could certainly be more comprehensive (and limited rehearsal time meant I had to read from my notes a lot).
I'd be interested in any feedback (e.g., via comments, DMs, or anonymously). Here are some questions I'm particularly interested in:
Thanks to the EA Summit Vancouver '25 organizers for putting on a fantastic summit and for the opportunity to present this talk there.