I spend several hours a day trying to keep up with what’s going on in the parts of AI that I’m interested in. It’s a ridiculous amount of work: I don’t recommend it unless you’re doing something silly like writing a newsletter about AI.
But if you’d like to keep up with AI without spending your entire life on it, I have advice about who to follow. My recommendations center on the areas I’m most interested in: AI safety and strategy, capabilities and evaluations, and predicting the trajectory of AI.
Let’s start with the top 10.
Zvi Mowshowitz
Substack: Don’t Worry About the Vase
Best for: comprehensive coverage, opinionated insight
Example: AI #163: Mythos Quest
If I could only follow one person, it would unquestionably be Zvi. He’s comprehensive in his coverage and has consistently solid insight into everything that’s happening in AI.
Zvi has one huge downside: he’s staggeringly prolific. In the first half of April he posted 11 times, for a total of about 97,000 words (roughly a novel). I read everything he writes because I’m insane, but I recommend you just skim his posts looking for the most interesting parts.
AI Futures Project
Substack: AI Futures Project
Best for: epistemically rigorous predictions
Example: AI-2027
The AI Futures Project is best known for AI-2027, a scenario of how AI might unfold over the next few years. They are epistemically rigorous and very thoughtful in how they approach some very hard questions. By far the best source of useful predictions about where we’re headed.
Jack Clark
Substack: Import AI
Best for: weekly analysis of a few topics
Example: Import AI 452
Jack (who in his spare time runs the Anthropic Institute) writes an excellent weekly newsletter. He doesn’t try to be comprehensive, but picks a few papers or topics each week to go deep on. Excellent curation, outstanding analysis.
Dean Ball
Substack: Hyperdimensional
Best for: insightful analysis of AI progress and strategy
Example: On Recursive Self-Improvement (Part I)
Dean is an insightful writer who describes his focus as “emerging technology and the future of governance”. He has perhaps thought harder than anyone about how to integrate transformative AI into a classical liberal framework, as well as how government should and shouldn’t manage AI.
Ryan Greenblatt
Less Wrong: Ryan Greenblatt
Best for: deep technical analysis of AI capabilities and progress
Example: My picture of the present in AI
Ryan’s an AI researcher and prolific writer with deep insight into the technical side of AI. I appreciate both his technical understanding of capabilities as well as his willingness to make informed guesses and extrapolations.
80,000 Hours podcast
Podcast: The 80,000 Hours Podcast
Best for: well-curated interviews
Example: Ajeya Cotra
80,000 Hours is best known for giving career advice to people who want to help solve the world’s most pressing problems. But on the side, they run an excellent podcast. The guests and topics are well-chosen and I appreciate that they not only provide a transcript, but also a detailed summary of the interview. The world would be a better place if every podcast provided such comprehensive supplementary materials.
Dwarkesh Patel
Substack: Dwarkesh Patel
Best for: long, well-researched interviews
Example: AI-2027 with Daniel Kokotajlo and Scott Alexander
Dwarkesh is an outstanding interviewer who clearly does extensive preparation before each interview. He gets excellent guests and makes the most of them, although his interviews often run very long. Also, his beard is magnificent.
Anton Leicht
Substack: Threading the Needle
Best for: US and global AI politics
Example: Press Play to Continue
I don’t always agree with Anton, but I always come away from his writing feeling smarter about something important. He occupies an interesting niche: neither blow-by-blow political news nor abstract political philosophy, but thoughtful analysis of the political currents of the moment, with solid strategic advice.
Transformer
Substack: Transformer
Best for: broader coverage of AI
Example: April 10 Transformer Weekly
Transformer produces a weekly newsletter as well as articles on particular topics. I particularly like their broad coverage: they often include news that many of my other feeds don’t. The newsletter is always good, as are some of the articles.
Epoch AI
Substack: Epoch AI
Best for: hard data on industry trends
Example: The Epoch Brief—March 2026
Epoch’s a fantastic source for more technical trends: GPU production, compute usage during training, capability gaps between open and closed models, etc.
If you want to go deeper in a particular area, here are 28 more sources that are particularly good, organized by topic.
Analysis and prediction
Ajeya Cotra (X)
Ajeya works at METR and does consistently strong work on measuring and predicting AI capabilities. I’ve found Six milestones for AI automation helpful for clarifying my own thinking about timelines.
Daniel Kokotajlo (X)
Daniel founded the AI Futures Project and worked on their AI-2027 scenario. His forecasting work is outstanding and his X feed is particularly well curated.
Helen Toner (Substack)
Helen blogs infrequently, but her articles are invariably excellent, with a knack for identifying the most important high-level questions about AI. Taking Jaggedness Seriously is typical of her work.
Prinz (Substack)
Prinz is a generalist who covers a range of topics with a focus on capabilities and using AI for legal work. His account on X often features commentary on current news.
Steve Newman (Substack)
Steve is an infrequent writer whose pieces about the trajectory of AI are invariably excellent. 45 thoughts about agents is a recent favorite.
Understanding AI (Substack)
Understanding AI is a generalist newsletter with broader coverage than many of the other sources I’ve listed.
Safety, alignment, model psychology
AI Safety Newsletter
Does exactly what it says on the tin—it’s perhaps the single best place to find all the latest safety news.
Anthropic Research (web)
Anthropic Research is a great source of alignment and interpretability work. The summaries are somewhat technical, but should be accessible to anyone who follows AI seriously. Emotion concepts and their function in a large language model is typical of the research they feature.
Jeffrey Ladish (X)
Jeffrey is a reliable source of safety-focused commentary on recent developments.
UK AISI (web)
Am I actually recommending a European government organization as a good source of information about AI? Strangely, I am doing exactly that. UK AISI does consistently very strong work on safety evaluations and related topics. Their analysis of Mythos’ cyber capabilities is typical of their careful, in-depth work.
Coding and technical
Andrej Karpathy (X)
Karpathy is a legend for his work at OpenAI and Tesla as well as his ridiculously good ML tutorials. He isn’t a prolific poster, but when he does post (mostly about ML and coding), it’s always worth reading. His recent post on LLM Knowledge Bases has been deservedly popular.
Beren (Substack)
Beren posts infrequently, but I’ve found him to be consistently insightful. He tends to post about important topics that other people haven’t noticed, which is particularly useful. Do we want obedience or alignment? is an excellent introduction to one of the most important questions in alignment.
Boris Cherny (X)
Nothing special, just the guy who came up with Claude Code. His feed is one of the best ways to keep up with the barrage of new CC features.
Daniel Litt (X)
Daniel writes frequently about using AI for math. He strikes a rare balance: he’s appropriately skeptical about the vast amounts of hype, but clear-eyed about what AI is capable of and where it’s headed. Mathematics in the Library of Babel is an excellent overview of current AI capabilities in math.
Nicholas Carlini (web)
He doesn’t write often, but his work is always worth reading. He’s a security expert who recently joined Anthropic (you may have seen his name come up in some of the discussion about Mythos). Machines of Ruthless Efficiency is a year old but holds up well.
Simon Willison (web)
Simon’s an extremely prolific poster and one of my primary sources of news and insight about agentic coding.
Policy, governance, and strategy
AI Frontiers (web)
In-depth articles exploring a range of topics and perspectives related to AI policy and impacts. I particularly liked this recent piece exploring how AI might affect wages.
AI Policy Perspectives (Substack)
Thoughtful, in-depth pieces about AI policy, safety, and impacts. The subtitle is “big questions and big ideas on artificial intelligence”, which sums it up nicely.
Benjamin Todd (Substack)
Benjamin’s piece on How AI-driven feedback loops could make things very crazy, very fast is typical of his work: speculative, but well grounded in facts and technical understanding.
ChinaTalk
ChinaTalk is my favorite source of news and analysis on AI in China as well as Chinese society and politics more broadly. Their pieces often run long—I’m selective about which ones I read, but I get a lot of value from them.
Forethought (Substack)
Reading Forethought is like stumbling upon a really good late night hallway conversation about possible future applications of AI. Speculative, but thoughtful and high quality.
Windfall Trust (Substack)
Windfall Trust is one of the best sources I know of for information and policy ideas about jobs, the economy, and the social contract in the age of AI. The Windfall Policy Atlas does a great job of collecting information about numerous policy options in a single well-organized place.
Industry
Andy Masley (Substack)
Andy is the go-to guy for rebutting the endless stream of nonsense claims about AI and the environment. Start with this one.
Boaz Barak (X)
Boaz (OpenAI) sometimes posts long articles, but I largely follow him for his frequent commentary on recent news and papers. He seems too nice to be allowed on X.
Jasmine Sun (Substack)
Jasmine Sun covers the culture of tech and Silicon Valley, as well as politics. I highly recommend my week with the AI populists: she does a great job of shedding light on what’s becoming a central force in AI politics.
Manifold (web)
Steve Hsu’s far-ranging Manifold podcast covers AI as well as physics, genetics, China, and more. Episodes often feature material from his upcoming documentary Dreamers and Doomers (most recently an interview with Richard Ngo).
Nathan Lambert (Substack)
Nathan’s my go-to for news and opinion about open models. Championing American open models isn’t an easy role, but he does it well.
OpenAI (web)
OpenAI publishes frequently—it’s worth keeping an eye on their stream, even though you probably won’t want to read much of it. There are some gems here, although a lot of it is beautifully polished corporate nothing-speak.