Someone who is interested in learning and doing good.
My Twitter: https://twitter.com/MatthewJBar
My Substack: https://matthewbarnett.substack.com/
I expect there are no claims to the effect that there will be only one chance to correctly align the first AGI.
For the purpose of my argument, there is no essential distinction between 'the first AGI' and 'the first ASI'. My main point is to dispute the idea that there will be a special 'it' at all, which we need to align on our first and only try. I am rejecting the scenario where a single AI system suddenly takes over the world. Instead, I expect AI systems will gradually and continuously assume more control over the world: not one decisive system, but a continuous process of AIs accumulating greater power over time.
To understand the distinction I am making, consider the analogy of genetically engineering humans. By assumption, if the tech continues improving, there will eventually be a point where genetically engineered humans will be superhuman in all relevant respects compared to ordinary biological humans. They will be smarter, stronger, healthier, and more capable in every measurable way. Nonetheless, there is no special point at which we develop 'the superhuman'. There is no singular 'it' to build, which then proceeds to take over the world in one swift action. Instead, genetically engineered humans would simply progressively get smarter, more capable, and more powerful across time as the technology improves. At each stage of technological innovation, these enhanced humans would gradually take over more responsibilities, command greater power in corporations and governments, and accumulate a greater share of global wealth. The transition would be continuous rather than discontinuous.
Yes, at some point such enhanced humans will possess the raw capability to take control of the world through force. They could theoretically coordinate to launch a sudden coup against existing institutions and seize power all at once. But the default scenario seems more likely: a continuous transition from ordinary human control over the world to control by genetically engineered superhumans. They would gradually occupy positions of power through normal economic and political processes rather than through sudden conquest.
You're saying that slow growth on multiple systems means we can get one of them right, by course correcting.
That's not what I'm saying. My argument was not about multiple simultaneously existing systems growing slowly together. It was instead about three things: disputing the idea that there is a unique or special point in time when we build "it" (i.e., the AI system that takes over the world), the value of course correction, and the role of continuous iteration.
As the review makes very clear, the argument isn't about AGI, it's about ASI. And yes, they argue that you would in fact only get one chance to align the system that takes over.
I'm aware; I was expressing my disagreement with their argument. My comment was not premised on whether we were talking about "the first AGI" or "the first ASI". I was making a more fundamental point.
In particular: I am precisely disputing the idea that there will be "only one chance to align the system that takes over". In my view, the future course of AI development will not be well described as having a single "system that takes over". Instead, I anticipate waves of AI deployment that gradually and continuously assume more control.
I fundamentally dispute the entire framing of thinking about "the system" that we need to align on our "first try". I think AI development is an ongoing process in which we can course correct. I am disputing that there is an important, unique point when we will build "it" (i.e. the ASI).
I would strongly disagree with the notion that FOOM is “a key plank” in the story for why AI is dangerous. Indeed, one of the most useful things that I, personally, got from the book was seeing how it is *not* load-bearing for the core arguments.
I think the primary reason why the foom hypothesis seems load-bearing for AI doom is that, without a rapid and local AI takeoff, we simply won't get "only one chance to correctly align the first AGI [ETA: or the first ASI]".
If foom occurs, there will be a point where a company develops an AGI that quickly transitions from being just an experimental project to something capable of taking over the entire world. This presents a clear case for caution: if the AI project you're working on will undergo explosive recursive self-improvement, then any alignment mistakes you build into it will become locked in forever. You cannot fix them after deployment because the AI will already have become too powerful to stop or modify.
However, without foom, we are more likely to see a gradual and diffuse transition from human control over the world to AI control over the world, without any single AI system playing a critical role in the transition by itself. The fact that the transition is not sudden is crucial because it means that no single AI release needs to be perfectly aligned before deployment. We can release imperfect systems, observe their failures, and fix problems in subsequent versions. Our experience with LLMs demonstrates this pattern: we have been able to fix errors after deployment and ensure that future model releases don't repeat the same problems (as illustrated by Sydney Bing, among other examples).
A gradual takeoff allows for iterative improvement through trial and error, and that matters enormously. Without foom, there is no single critical moment where we must achieve near-perfect alignment without any opportunity to learn from real-world deployment. There won't be a single, important moment where we abruptly transition from working on "aligning systems incapable of taking over the world" to "aligning systems capable of taking over the world". Instead, systems will simply gradually and continuously get more powerful, with no bright lines.
Without foom, we can learn from experience and course-correct in response to real-world observations. My view is that this fundamental process of iteration, experimentation, and course correction in response to observed failures makes the problem of AI risk dramatically more tractable than it would be if foom were likely.
Roko says it's impossible, I say it's possible and likely.
I'm not sure Roko is arguing that it's impossible for capitalist structures and reforms to make a lot of people worse off. That seems like a strawman to me. The usual argument here is that such reforms are typically net-positive: they create a lot more winners than losers. Your story here emphasizes the losers, but if the reforms were indeed net-positive, we could just as easily emphasize the winners who outnumber the losers.
In general, literally any policy that harms people in some way will look bad if you focus solely on the negatives, and ignore the positives.
I recognize that. But it seems kind of lame to respond to a critique of an analogy by simply falling back on another, separate analogy. (Though I'm not totally sure if that's your intention here.)
I'm arguing that we won't be fine. History doesn't help with that; it's littered with examples of societies that thought they would be fine. An example I always mention is enclosures in England, where the elite deliberately impoverished most of the country to enrich themselves.
Is the idea here that England didn't do "fine" after enclosures? But in the century following the most aggressive legislative pushes towards enclosure (roughly 1760-1830), England led the Industrial Revolution, with large, durable increases in standards of living for the first time in world history, for all social classes, not just the elite. Enclosure likely played a major role in the rise in English agricultural productivity, which created unprecedented food abundance.
It's true that not everyone benefitted from these reforms, that inequality increased, and that a lot of people became worse off from enclosure (especially in the short term, during the so-called Engels' pause). But on the whole, I don't see how your example demonstrates your point. If anything, your example proves the opposite.
It would be helpful if you could clearly specify the "basic argumentation mistakes" you see in the original article. The parent comment mentioned two main points: (1) the claim that I'm being misleading by listing costs of an LVT without comparing them to benefits, and (2) the claim that an LVT would likely replace existing taxes rather than add to them.
If I'm wrong on point (2), that would likely stem from complex empirical issues, not from a basic argumentation mistake. So I'll focus on point (1) here.
Regarding (1), my article explicitly stated that its purpose was not to offer a balanced evaluation, but to highlight potential costs of an LVT that I believe are often overlooked or poorly understood. This is unlike the analogy given, where someone reviews a car only by noting its price while ignoring its features. The disanalogy is that, in the case of a car review, the price is already transparent and easily verifiable. However, with an LVT, the costs are often unclear, downplayed, or poorly communicated in public discourse, at least in my experience on Twitter.
By pointing out these underdiscussed costs, I'm aiming to provide readers with information they may not have encountered, helping them make a more informed overall judgment. Moreover, I explicitly and prominently linked to a positive case for an LVT and encouraged readers to compare both perspectives to reach a final conclusion.
A better analogy would be a Reddit post warning that although a car is advertised at $30,000, the true cost is closer to $60,000 after hidden fees are included. That post would still add value, even if it doesn’t review the car in full, because it would provide readers with valuable information they might not be familiar with. Likewise, my article aimed to contribute by highlighting costs of an LVT that might otherwise go unnoticed.
Doesn't the revealed preference argument also imply people don't care much about dying from aging? (This is invested in even less than catastrophic risk mitigation and people don't take interventions that would prolong their lives considerably.) I agree revealed preferences imply people care little about the long run future of humanity, but they do imply caring much more about children living full lives than old people avoiding aging.
I agree that the amount of funding explicitly designated for anti-aging research is very low, which suggests society doesn't prioritize curing aging as a social goal. However, I think your overall conclusion is significantly overstated. A very large fraction of conventional medical research specifically targets health and lifespan improvements for older people, even though it isn't labeled explicitly as "anti-aging."
Biologically, aging isn't a single condition but rather the cumulative result of multiple factors and accumulated damage over time. For example, anti-smoking campaigns were essentially efforts to slow aging by reducing damage to smokers' bodies—particularly their lungs—even though these campaigns were presented primarily as life-saving measures rather than "anti-aging" initiatives. Similarly, society invests a substantial amount of time and resources in mitigating biological damage caused by air pollution and obesity.
Considering this broader understanding of aging, it seems exaggerated to claim that people aren't very concerned about deaths from old age. I think public concern depends heavily on how the issue is framed. My prediction is that if effective anti-aging therapies became available and proven successful, most people would eagerly pay large sums for them, and there would be widespread political support to subsidize those technologies.
Right now, explicit support for anti-aging research is indeed politically very limited, but that's partly because robust anti-aging technologies haven't been clearly demonstrated yet. Medical interventions that have proven effective at slowing aging (even if not labeled as such) have generally been marketed as conventional medicine and typically enjoy widespread political support and funding.
Is it your claim here that the book is arguing the conditional "If there's an intelligence explosion, then everyone dies"? If so, then it seems completely valid to counterargue: "Well, an intelligence explosion is unlikely to occur, so who cares?"