I commented on the Substack:
John Collison: To put numbers on this, you've talked about the potential for a 10% annual economic growth powered by AI. Doesn't that mean that when we talk about AI risk, it's often harms and misuses of AI, but isn't the big AI risk that we slightly misregulated or we slowed down progress, and therefore there's just a lot of human welfare that's missed out on because you don't have enough AI?
Dario's former colleague at OpenAI, Paul Christiano, has a great 2014 blog post "On Progress and Prosperity" that does a good job explaining why I don't believe this.
In short, "It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course."
"For example, if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, Robin Hanson points out each atom in our galaxy would need to be about 10140 times as valuable as modern society."
"So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants--they will live in a world that is "saturated," where progress has run its course and has only very modest further effects."
"I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn't have much effect on very long-term outcomes."
The key assumption here is that people care about the very long future, and care about it as much as they care about the near future (say, 10-30 years out). I don't see much evidence of people caring about outcomes thousands of years into the future, let alone millions or billions of years.
More generally, one of my central disagreements with people like Habryka is that I think humans have intrinsic discount rates much higher than 0%, that long-term moral views treating future people as just as valuable as present people are way rarer than people on here think, and that we are self-selected to have low discount rates.
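To make the discount-rate point concrete, here is a small illustration with an assumed 2% annual intrinsic discount rate (a made-up example rate, not a claim about anyone's actual rate), showing how quickly any noticeably positive rate drives the present value of far-future welfare toward zero:

```python
import math

discount_rate = 0.02                    # assumed example: 2%/year intrinsic discount rate
horizons = [30, 100, 1_000, 1_000_000]  # years into the future

for t in horizons:
    # Present value of one unit of welfare t years out, in log10 to avoid underflow.
    log10_pv = -t * math.log10(1 + discount_rate)
    print(f"{t:>9} years out: present value ~10^{log10_pv:.1f} of a unit today")
```

With any rate like this, outcomes more than a few centuries out contribute essentially nothing to present value, which is why the "lost far-future welfare" framing only has force if you assume discount rates very close to zero.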
Dario Amodei: "Now, I'm not at all an advocate of like, "Stop the technology. Pause the technology." I think for a number of reasons, I think it's just not possible. We have geopolitical adversaries; they're not going to not make the technology, the amount of money... I mean, if you even propose even the slightest amount of... I have, and I have many trillions of dollars of capital lined up against me for whom that's not in their interest. So, that shows the limits of what is possible and what is not."
Anthropic has a March 2023 blog post "Core Views on AI Safety: When, Why, What, and How" that says:
If we’re in a pessimistic scenario [in which "AI safety is an essentially unsolvable problem – it’s simply an empirical fact that we cannot control or dictate values to a system that’s broadly more intellectually capable than ourselves – and so we must not develop or deploy very advanced AI systems"]… Anthropic’s role will be to provide as much evidence as possible that AI safety techniques cannot prevent serious or catastrophic safety risks from advanced AI, and to sound the alarm so that the world’s institutions can channel collective effort towards preventing the development of dangerous AIs. If we’re in a “near-pessimistic” scenario, this could instead involve channeling our collective efforts towards AI safety research and halting AI progress in the meantime. Indications that we are in a pessimistic or near-pessimistic scenario may be sudden and hard to spot. We should therefore always act under the assumption that we still may be in such a scenario unless we have sufficient evidence that we are not.
So Anthropic has specifically written that we may need to halt AI progress and prevent the development of dangerous AIs, and now we have Dario saying that he is not at all an advocate of pausing the technology, and even going so far as to say that it's not possible to pause it.
In the same post, Anthropic wrote "It's worth noting that the most pessimistic scenarios might look like optimistic scenarios up until very powerful AI systems are created. Taking pessimistic scenarios seriously requires humility and caution in evaluating evidence that systems are safe."
It doesn't seem like Dario is doing what Anthropic wrote we should do: "We should therefore always act under the assumption that we still may be in such a [pessimistic] scenario unless we have sufficient evidence that we are not." We clearly don't have sufficient evidence that we are not in such a situation, especially since "the most pessimistic scenarios might look like optimistic scenarios up until very powerful AI systems are created."
This is a link-post of John Collison's August 6, 2025 podcast interview with Dario Amodei: https://cheekypint.substack.com/p/a-cheeky-pint-with-anthropic-ceo
Key excerpt:
John Collison:
To put numbers on this, you've talked about the potential for a 10% annual economic growth powered by AI. Doesn't that mean that when we talk about AI risk, it's often harms and misuses of AI, but isn't the big AI risk that we slightly misregulated or we slowed down progress, and therefore there's just a lot of human welfare that's missed out on because you don't have enough AI?

Dario Amodei:
Yeah. Well, I've had the experience where I've had family members die of diseases that were cured a few years after they die, so I truly understand the stakes of not making progress fast enough. I would say that some of the dangers of AI have the potential to significantly destabilize society or threaten humanity or civilization, and so I think we don't want to take idle chances with that level of risk.

Now, I'm not at all an advocate of like, "Stop the technology. Pause the technology." I think for a number of reasons, I think it's just not possible. We have geopolitical adversaries; they're not going to not make the technology, the amount of money... I mean, if you even propose even the slightest amount of... I have, and I have many trillions of dollars of capital lined up against me for whom that's not in their interest. So, that shows the limits of what is possible and what is not.
But what I would say is that instead of thinking about slowing it down versus going at the maximum speed, are there ways that we can introduce safety, security measures, think about the economy in ways that either don't slow the technology down or only slow it down a little bit? If, instead of 10% economic growth, we could have 9% economic growth and buy insurance against all of these risks. I think that's what the trade-off actually looks like. And precisely because AI is a technology that has the potential to go so quickly, to solve so many problems, I see the greater risk as the thing could overheat, right? And so I don't want to stop the reaction, I want to focus it. That's how I think about it.