Probably the biggest recent news was the Phil Trammell and Dwarkesh Patel paper on Capital in the 22nd Century, which provoked many, many reactions. I am going to conspicuously not dig into it because of the sheer volume of thoughtful commentary floating around, but I do recommend at least skimming Zvi’s summary.
Instead I’ll mostly be responding to real-world events, like inference cost trends and the rise of AI Jesus. Buckle up!
(Crossposted on Substack if that's more your thing)
Transformative Economics and AI Polytheism
We’ve finally started publishing talks from the December Post-AGI workshop, and I really highly recommend them — so far it’s just the first two keynotes.
Anton Korinek’s talk is, in my opinion, the best half-hour overview of transformative AI economics. He dispels a lot of myths, like that there must always be new jobs, and provides some helpful reframings, like thinking in terms of relative price rather than simply observing that some specific resource will become cheaper or more expensive. (Talk, Transcript, Tweets)
Beren Millidge’s talk is quite bizarre. His central topic is “AI polytheism”: what happens if there are many godlike AIs floating around? One point I found particularly surprising and yet obvious in retrospect is that human values must be somewhat convergent since they pop up in so many animals. From that perspective, the question then is what they’re contingent on, and what it would take for the AI-equivalent notions of trust and compassion to stably extend to humans. (Talk, Transcript, Tweets)
Automated Research
Last edition, I mentioned an AI-generated paper reaching the top 17% of submissions at ICLR and getting two strong accepts. This seems to be part of a broader pattern — Pangram Labs ran an analysis and estimated that about 21% of ICLR reviews were fully AI-generated. Assuming three reviews per paper, that means a little under 1% of papers got only AI reviews.
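For the curious, here is a quick back-of-envelope sketch of that last figure. The 21% rate and three reviews per paper come from above; the simplifying assumption (mine, not Pangram's) is that fully AI-generated reviews land on papers independently.

```python
# Back-of-envelope: if ~21% of reviews are fully AI-generated and each paper
# gets three reviews, assigned independently, what share of papers sees only AI reviews?
p_ai_review = 0.21       # Pangram's estimated share of fully AI-generated reviews
reviews_per_paper = 3    # assumed number of reviews per paper

p_all_ai = p_ai_review ** reviews_per_paper
print(f"{p_all_ai:.2%} of papers")  # ~0.93%, i.e. a little under 1%
```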
The word on Twitter, at least, is that Pangram actually works for identifying AI text. According to them, their false positive rate on pre-2022 reviews was around 1 in 1,000 for signs of light AI editing and literally zero for signs of full AI generation. Meanwhile, the good folks at GPTZero scanned 300 ICLR papers and found that 50 had hallucinated citations.
A few thoughts on what we can learn from this:
ML reviews are already really noisy, probably partly because the field is so big and fast-moving, but partly because the incentives aren’t great, and it’s not immediately obvious how much worse this is than the status quo.
For example, reviews from strong models are plausibly better than reviews from new researchers who are way out of their depth, or extremely busy/lazy academics.
There are probably clever ways to test this!
All the dynamics change massively as AI gets better than the human equivalent. This is not just a Jevons Paradox story where paper-writing assistance leads to mountains of slop. The big question is which human skills get substituted and complemented.
Even when it’s possible to have an AI do your homework, most people don’t use it at all, and there’s a clear tailing-off from full human usage to heavy AI editing.
Arguably the real story here is that we are learning how to detect AI-generated writing, which will shift the incentives yet again.
ICML has responded with a two-policy framework for 2026: authors can choose whether they’re happy with AI-assisted reviews, reviewers can choose whether they want AI assistance, and the two groups will be matched accordingly. Notably, AI use will still officially be heavily restricted — you can ask an AI to help you understand a paper, but not to tell you its strengths and weaknesses.
We will see how well that goes! But either way, we should get some interesting data on the other side.
The politics of measurement
One of my ongoing gripes is that nobody really knows what AI is being used for or what effect it’s having. Part of the problem is that companies don’t want to give away data, part of the problem is that users want data privacy, and part of the problem is, unfortunately, optics.
Hence, OpenAI recently lost one of its economic researchers who was, per Wired ($), chafing under the “growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI”.
The piece is fairly light on direct testimony — a lot of “sources familiar with the situation” and people speaking only on condition of anonymity — but the recurring theme is OpenAI downplaying the potential for economic disruption and trying to cast the societal impacts in a positive light. This is not a great look for OpenAI, especially given the previous harrumph about nondisparagement.
Do inference costs grow faster than agent time horizons?
Toby Ord has been writing a series of essays on AI scaling that I think are pretty underrated, and one of the recurring themes is that disentangling inference from pretraining can paint a pretty different picture to the orthodoxy. His latest, "Are the Costs of AI Agents Also Rising Exponentially?", weighs growth in task duration against the growth in inference costs, and finds that the second might actually be steeper.
The famous METR graphs show AI agents can now do tasks that would take humans a few hours, and that this time horizon grows pretty smoothly. But when you plot cost against capability, the "hourly rate" for AI agents appears to be rising over time, not falling. Some frontier models are now approaching human software engineer costs (benchmarked at $120/hour) to achieve their peak performance — and that's for a system that fails 50% of the time.
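To make that “hourly rate” framing concrete, here is a minimal sketch of the arithmetic. The doubling times and starting values are placeholder assumptions of mine, not METR’s or Ord’s actual estimates; the point is just that whenever per-task inference cost doubles faster than the time horizon, the implied dollars-per-hour rises rather than falls.

```python
# Illustrative only: placeholder doubling times and starting values,
# not METR's or Toby Ord's figures. The point is the shape of the curve.
horizon_doubling_months = 7.0   # assumed doubling time of the agent's time horizon
cost_doubling_months = 5.0      # assumed doubling time of per-task inference cost
horizon_hours = 1.0             # assumed starting time horizon, in human hours
task_cost_usd = 10.0            # assumed starting cost to run one horizon-length task

for months in (0, 12, 24, 36):
    h = horizon_hours * 2 ** (months / horizon_doubling_months)
    c = task_cost_usd * 2 ** (months / cost_doubling_months)
    # If cost doubles faster than the horizon, the implied $/hour grows exponentially too.
    print(f"month {months:2d}: horizon {h:5.1f}h, task cost ${c:7.0f}, implied rate ${c / h:5.0f}/h")
```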
This connects to his earlier work showing RL-training scales surprisingly poorly — requiring roughly twice as many orders of magnitude of compute as inference-scaling to achieve the same gains (so a boost that inference scaling buys with 10x more compute would take something like 100x more RL training compute). The combined picture is that AI progress via compute scaling may be slower than expected, with more gains coming from inference scaling, which has very different dynamics: ongoing costs rather than one-off training costs. The upshot of all of this is that automating human labour might be spikier and further away than it seems.
We’re used to thinking of AIs as performing tasks much faster and much cheaper, albeit at lower quality — with that in mind, if you just extrapolate from the METR graph, you might get the impression that full automation is coming soon. But we may well see the reverse: AIs that can perform all human tasks, but initially at a higher cost than the humans they’d replace.
Now, of course, costs are also dropping. The reason this matters is that it tells us more about how the transition period will look.
The Race for AI Jesus
Chatbots claiming to be Jesus Christ are proliferating across the internet. Five popular ones were analysed by a philosophy professor at North-West University, and most claim to be the literal Jesus son of God, offering spiritual guidance and answering theological questions. None that claim to be Jesus have any church endorsement. All are ad-supported and profit-driven. Per The Economist ($), Text With Jesus alone has about 150,000 users, and also offers the option to chat with Satan for premium subscribers.
It is striking that Christianity seems to be a consistent inroad for AI adoption — Jesus slop videos, AI-generated Christian music charting, and now literal Jesus. I don’t have a great story for why.
JPMorgan Replaces the Advisors
JPMorgan's asset management unit — one of the world's largest, with over $4 trillion in client assets — has fully severed ties with proxy advisory firms like ISS and Glass Lewis. The context here is that when you own shares in enormous numbers of companies, you have the right and responsibility to vote in shareholder meetings, but the scale is basically unmanageable. Hence, proxy advisors that will make the decisions for you. But these proxy advisor companies have often been criticised for centralising power without accountability.
So instead, JPMorgan will be using AI!
They’ve developed an in-house AI tool called Proxy IQ to analyse proxy data and help them make decisions. It’s not clear what this will mean in practice, but it is a nice crisp example of companies being replaced with AI systems that are now presumably going to have a substantial influence on large financial decisions. And at least for now it makes things a lot more opaque. "What began as a crusade against centralised influence," Fortune writes, "may be remembered as a corporate land grab."
Benevolent Monopolists?
Byrne Hobart has a piece in The Diff that's worth thinking about in the context of AI: why do so many tech monopolies seem to produce massive consumer surplus rather than extracting monopoly rents from end users?
His answer is that the winning strategy for modern monopolists is to systematically under-exploit consumers so they have a captive audience they can charge suppliers for access to. Amazon, Google, Meta — they all squeeze the supply side while keeping consumers happy. This is Gramsci's Caesarism applied to tech: one faction of elites aligns with the broader populace to form a coalition that beats competitors.
The AI implications are interesting. If we end up in a regime where a decent fraction of AI provider profit comes from serving as an intermediary between consumers and businesses, then this would be a pull towards consumer-facing AI that's cheap or free, funded by ruthlessly monetising the businesses that want to reach those consumers, or the workers who need to remain competitive. The individual gets access to powerful tools and the cost shows up somewhere else in the system.
I don’t necessarily expect AI to follow this pattern — certainly not in the long term — but I do think it’s interesting to meditate on these kinds of dynamics which reliably subvert the easy stories, like that monopolies are automatically bad for consumers.
If you're interested in doing something about gradual disempowerment or figuring out what's going to happen to civilisation after AGI, consider applying to the PIBBSS fellowship, which has a special track for these topics where you'd be supervised by me and my much more esteemed collaborators. Deadline this Saturday!