In 2018 the Secretary-General of the United Nations, António Guterres, said that “Climate change is the defining issue of our time.” That was a pretty reasonable thing to say in 2018, but I’m not so sure it’s reasonable now. Folks in the climate community probably need to acknowledge that AI is at least as dangerous as climate change, and that it’s likely to come at us much faster.
I think the timing of these two parallel threats matters. We need to know how to allocate resources, how to communicate with the public, and how to plan for the future.
To help, I’ve created a timeline. Along the top you can see the earliest onset times for some important climate tipping points. Crossing any tipping point could push us into a new, painful equilibrium that is difficult to reverse on human timescales, even if we had huge amounts of resources to throw at the problem.
Timeline details: AGI timelines are based on Metaculus question 5121, with median 2033, lower quartile (25%) 2028, and upper quartile (75%) 2045. Climate events show when we could first expect to feel the effects of a particular tipping point. Studies differ widely; here I err on the side of giving the earliest reported onset time for each climate event, so these are aggressive timelines for AMOC and Amazon dieback. The 2037 to 2055 window is particularly speculative for AMOC and is hashed to indicate this. Claude assisted with the visual.
Beneath the climate tipping points you can see the distribution of forecasts on a prediction market for the date of the first public Artificial General Intelligence (AGI). The estimates here are for a strong form of AGI that aces intelligence tests, is competent in robotics, and passes an adversarial Turing test. This means the AI needs to convince an intelligent judge during a two-hour interrogation session that it is actually human, and then also be able to use robotic limbs to assemble a model car. If you don’t like prediction markets, you can look at trends in software task completion time and you’ll end up with a similar timeline.
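If you want to sanity-check the shape of that forecast yourself, here is a minimal sketch of one way to approximate it: fit a lognormal to the three reported quartiles. The reference year (2024) and the choice of distribution are my assumptions for illustration; this is not Metaculus methodology, and not necessarily how the curve in my figure was built (see the notes at the end).

```python
# Sketch: approximate the Metaculus AGI forecast from its quartiles by
# fitting a lognormal in "years from now" space. Assumptions: 2024 as the
# reference year, lognormal shape. Not the market's own aggregation method.
import numpy as np
from scipy.stats import lognorm, norm

REF_YEAR = 2024  # assumed "now" for the fit
quartiles = {0.25: 2028, 0.50: 2033, 0.75: 2045}

# Fit in log space: the median pins mu; the outer quartiles give two
# (slightly inconsistent) estimates of sigma, which we average.
log_years = {p: np.log(y - REF_YEAR) for p, y in quartiles.items()}
mu = log_years[0.50]
z75 = norm.ppf(0.75)  # ≈ 0.6745
sigma = np.mean([(log_years[0.75] - mu) / z75, (mu - log_years[0.25]) / z75])

dist = lognorm(s=sigma, scale=np.exp(mu))
for year in (2028, 2033, 2045, 2060):
    print(f"P(AGI by {year}) ≈ {dist.cdf(year - REF_YEAR):.2f}")
```

By construction the fit reproduces the quoted quartiles almost exactly, and it lets you read off probabilities the market didn’t report directly.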
If you think that AGI is really dangerous, or will be so powerful that it drastically lowers the obstacles presented by climate change, then it makes sense to prioritize AI safety, because it looks like AGI is coming faster. But there are still circumstances where climate change could pose massive near- or medium-term risk. What if AGI comes late and climate disasters come early? What if AGI comes ‘on time’ but governments throttle AI development? What if AGI isn’t revolutionary and climate change remains really hard to solve? And beyond that, we should probably think about how these two threats could interact.
To ground these questions, I want to give a quick guided tour of some of the worst tipping points that climate change could bring about.
West Antarctic Ice Sheet
Sea level rise was one of the first effects of climate change that communicators really fixated on. Given that we (now) know how little people care about distant consequences, this was probably a mistake. The noticeable effects of climate change that people experience today are much more likely to be heatwaves or wildfires. Sea level rise already contributes to storm surges and places stress on coastal megacities, so it’s not nothing, but it is not an existential risk in the near term.
My sense is that most people don’t realize how long the timelines are for glacial melt and sea level rise. We may lock in tipping points like the melting of the West Antarctic Ice Sheet relatively early, which is bad, but the actual melting will take a *really* long time. I find it hard to imagine a situation where we have a functioning society in 2100 or 2150 but also robots are useless and AI models are still hallucinating. If that does happen, then we are really going to regret not stepping up to the plate and doing more for climate change.
Coral Reef Collapse
Coral reefs are beautiful and fragile and many people rely on them to provide shelter from storm surges and to feed their families. And so I wish that climate change was not causing their degradation in such an unrelenting manner. But even if the worst comes to pass for coral reefs, I do not think it will lead to the extinction of humanity or civilizational collapse.
AMOC Collapse
I spent a couple of years living in the Hudson Bay area of Canada. At 55°N, wind chills of -55°C were not uncommon. That’s the kind of cold where you can throw boiling water into the air and it dissipates into snow before it hits the ground. Stockholm is actually further north (59°N) but it does not experience the same insane temperatures, because ocean currents (the Atlantic Meridional Overturning Circulation, or AMOC, in particular) act like a conveyor belt bringing tropical warmth north. But if enough fresh water melts off Greenland, it could disrupt the conveyor belt, which would stop delivering warm water to Europe: “AMOC collapse”.
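To see why this is a tipping point rather than a smooth slide, here is a toy one-variable demo in the spirit of Stommel’s classic 1961 two-box model. Everything here — the scaling, the parameter values, the ramp — is illustrative; the only point is that the circulation can collapse abruptly and then fail to recover even when the freshwater forcing is dialed back down.

```python
# Toy bistability demo inspired by Stommel's two-box model. x is a scaled
# north-south salinity difference, mu is scaled freshwater input from
# melting, and circulation strength is |1 - x|. Illustrative only.
import numpy as np

def settle(x, mu, steps=20000, dt=0.01):
    """Integrate dx/dt = mu - |1 - x| * x to (approximate) equilibrium."""
    for _ in range(steps):
        x += dt * (mu - abs(1.0 - x) * x)
    return x

def ramp(mus, x0):
    xs, x = [], x0
    for mu in mus:
        x = settle(x, mu)
        xs.append(x)
    return xs

melt = np.linspace(0.0, 0.35, 8)
up = ramp(melt, x0=0.1)            # slowly increase meltwater forcing
down = ramp(melt[::-1], x0=up[-1]) # then slowly decrease it again

print("flow as melt rises:", [f"{abs(1 - x):.2f}" for x in up])
print("flow as melt falls:", [f"{abs(1 - x):.2f}" for x in down])
# The flow collapses abruptly past a threshold and stays weak on the way
# back down: a tipping point with hysteresis, not a gradual dial.
```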
If this happened, all of Europe would get colder. Sunny Marseille is the same latitude as Toronto. Milan is the same latitude as Montreal. You can’t just transpose those temperatures, but this would be disastrous for crops, for energy infrastructure that is not hardened to lower temperatures, for people living in uninsulated homes, and so on. Picture frozen gas lines, frozen wind turbines, and lots of people relocating. It’s the plot of The Day After Tomorrow but more regional (with some chance of fast sea level rise on the East Coast of the US and Canada). As bad as it would be, it would play out slowly. “The Decade After Tomorrow.” So even if Europe started cooling with the release of ChatGPT 6, it might not really hit the bad stuff until ChatGPT 9 or 10.
Snowfall in Toronto compared to Marseille (same latitude), courtesy of weatherspark.com
This is geopolitical-disaster-level bad, though. If you think about how Brexit consumed UK institutional capacity, AMOC collapse would be like that but far more intense, for an entire continent, for decades. The resources needed for adaptation would be massive. If the early effects were sufficiently destabilizing, and if AI progress were slow, there could be insurance market collapse or crop failure right around the time that nations are racing to build the best AI. This could really complicate efforts to cooperate. And some scientists think it could trigger cascades of other tipping points: shutting down the current that brings warmth to Europe might hasten the dieback of the Amazon.
Amazon Dieback
The main idea here is that the Amazon is its own aerial irrigation system. Water is taken up by the roots of trees, evapotranspiration causes clouds to form above the trees, and then those clouds move away and rain on other parts of the Amazon. If you cut or burn chunks of the forest, then areas that previously received water from trees upwind no longer have that incoming moisture. So they dry up and become more susceptible to fire, especially if background temperatures are higher from climate change. Past a certain point this becomes a self-sustaining feedback that runs on its own, regardless of further human deforestation, and you end up converting the Amazon to a grassland.
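That feedback is easy to caricature in a few lines of code. This is a cartoon with made-up parameters, not a calibrated model; the only point is that below a critical amount of forest cover, the moisture-recycling loop unravels on its own:

```python
# Cartoon of the moisture-recycling feedback. All parameters are invented
# for illustration: forest cover feeds rainfall, and rainfall below a
# threshold gradually kills forest.
R_OCEAN = 0.4    # moisture arriving from the Atlantic (arbitrary units)
RECYCLE = 0.8    # extra rainfall contributed per unit of forest cover
THRESHOLD = 1.0  # rainfall below which the forest is drought-stressed
DIEBACK = 0.1    # fraction of stressed forest lost per time step

def run_feedback(forest, steps=200):
    """Less forest -> less recycled rain -> even less forest."""
    for _ in range(steps):
        rainfall = R_OCEAN + RECYCLE * forest
        if rainfall < THRESHOLD:
            forest *= 1 - DIEBACK
    return forest

for cleared in (0.10, 0.20, 0.30):
    remaining = run_feedback(forest=1.0 - cleared)
    print(f"clear {cleared:.0%} -> {remaining:.0%} of the forest survives")
```

In this toy, clearing 10% or 20% of the forest leaves a stable (if smaller) forest, while clearing 30% pushes the system over the threshold and the feedback finishes the job by itself.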
Amazon dieback is an even slower disaster than AMOC collapse, playing out across decades or even over a century. It would be disastrous for South America, would cause a loss of biodiversity like we haven’t seen since humanity caused the last big one, and would release a huge amount of carbon into the atmosphere. Maybe enough to lead to 0.1-0.2°C of temperature rise. The entire region would become considerably drier, with negative ramifications for hydropower (half of Brazil’s electricity), global food security (soy, cattle, etc.), and the regional and global economy. Everything I said about Europe spending its time and money dealing with AMOC collapse would also apply to South America.
Emergent climate catastrophe
I don’t want to underestimate climate change. One could very well imagine a disaster *less* horrible than Amazon dieback leading to supply chain blockages, which then lead to political unrest and a war between superpowers. That could happen next year. And I haven’t even listed all the effects of climate change. More intense tropical storms, reduced crop productivity, and extreme wildfires could all interact. Refugees fleeing disasters could generate political unrest and geopolitical instability, and so forth.
But I think tipping points are unique in that they create a pressure to quickly ‘solve’ climate change, which might tempt us to try geoengineering.
Termination Shock
If you don’t know what termination shock is, I think the phrase itself conveys an appropriate degree of horror. The idea here is that if climate change gets sufficiently terrible, some nation or group of nations may choose to deal with it by injecting reflective particles into the atmosphere to cool the Earth (stratospheric aerosol injection). We know with fairly high certainty that this would work, because volcanoes do the same thing, and we could run the whole process for an absurdly small amount of money.
When Mt Pinatubo erupted it cooled the Earth by about 0.6°C for 15 months (photo from Wikipedia).
One of the larger (but not the only) problems with this approach is that we would need to maintain the cooling project indefinitely, or at least until we removed enough CO₂ to compensate. Humans being what they are, you can imagine that such a program might kill our motivation to cut emissions. And so, with the pressure of a warming planet relieved, background GHG levels could keep rising. Then, if some geopolitical disaster struck and rendered us unable to continue the stratospheric aerosol injection program, the cooling would wear off quickly, and instead of returning to the old baseline temperature we would shoot past it, warming at a much faster rate than if there had been no cooling program in the first place. This is termination shock, and in my opinion it is the most likely recipe for civilizational collapse that we face from climate change.
Termination shock: If we fail to reduce emissions while cooling the planet with reflective particles, and then stop the program suddenly, temperatures would increase rapidly. Temperature pathways are just illustrative.
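To make the cartoon concrete, here is a tiny simulation in the same illustrative spirit as the figure. The warming rate, the response time, and the size of the aerosol offset are all invented numbers, not climate projections:

```python
# Illustrative-only termination shock toy: GHG forcing keeps rising,
# aerosols mask about 1 °C of it, and an abrupt stop exposes the
# accumulated warming all at once. Numbers are made up.
GHG_RATE = 0.03              # °C of committed warming added per year
TAU = 8.0                    # assumed surface-temperature response time (years)
SAI_START, SAI_STOP = 10, 40 # aerosol program runs over this window

def simulate(sai, years=80):
    temp, path = 1.2, []     # start ~1.2 °C above preindustrial
    for yr in range(years):
        target = 1.2 + GHG_RATE * yr          # where GHGs are pushing us
        if sai and SAI_START <= yr < SAI_STOP:
            target -= 1.0                     # aerosols mask ~1 °C
        temp += (target - temp) / TAU         # relax toward the target
        path.append(temp)
    return path

masked, unmasked = simulate(sai=True), simulate(sai=False)
rebound = masked[SAI_STOP + 10] - masked[SAI_STOP]
baseline = unmasked[SAI_STOP + 10] - unmasked[SAI_STOP]
print(f"warming in the decade after termination: {rebound:.2f} °C")
print(f"same decade with no program at all:      {baseline:.2f} °C")
```

Even in this crude toy, the decade after termination warms roughly three times faster than the no-program baseline, because thirty years of masked forcing arrives all at once.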
But the minimum timeline for termination shock is decadal, not annual. If we started stratospheric aerosol injection and ran it for two years before cancelling it, the return to baseline levels wouldn’t be earth-shattering. If we ran it for thirty years, weakened our efforts to cut emissions, and then stopped the program because of some war or pandemic, that would be truly devastating. And so once again, the absolute worst of all climate-related disasters is necessarily at least a few decades away.
Back to Artificial Intelligence
Alright, so climate change may reduce our capacity to deal with AI safety and could exacerbate any of the geopolitical issues that AI will pose. It’s harder to reach international agreement if your diplomats are distracted by climate problems. It’s harder to focus on aligning a superintelligent AI or implementing a stronger social safety net if your policymakers, researchers, and economists are swamped by extreme heat waves, bizarre weather patterns, or desperate geoengineering schemes.
But since AGI seems more likely to come early, we can also ask what that would mean for climate change. Explosive economic growth might entail substantial emissions. Even if datacenters can run on clean electricity, and even if AGI speeds up clean tech like batteries or negative emissions technology, there are plenty of sectors that may be difficult to decarbonize. Wealthier people eat more meat, fly more, and consume more goods. It’s hard to reduce emissions from livestock, air travel, and heavy industry. So maybe AGI neutralizes its own climate gains, or even makes it harder to meet our climate targets. The latter would be especially unfortunate if it locks in tipping points that later technological improvements are still incapable of addressing.
What this means for climate folks
If you believe, as I do, that there’s a decent chance artificial intelligence will make the coming years extremely turbulent, then we ought to modify our current practices to be consistent with those beliefs.
I think we should stop calling climate change the number one problem facing humanity (or at least hedge with, “one of the top” etc.). Climate change is still important, but it is one risk in a complicated ecosystem of risks.
It would make sense to rebalance the portfolio of your work to focus a little more on climate action with near-term benefits. This could include promoting plant-based diets, reducing harmful air pollution, and strengthening adaptation approaches like cooling centers.
Consider whether any of your climate knowledge might help with AI safety. Climate folks have done a lot of work on international governance, communicating risks to a lay audience, and so on. We should try to share that expertise. I have another post on this topic in the works.
Also, please see my preprint Perspective piece, “Climate research agendas should account for anticipated AI risks”, for more details.
What this means for the AI community
Here I have less actionable advice and am instead looking to provide helpful context about how climate risks could interact with AI risks in the near future. Still, it may be worth reflecting on:
The ways AI-related growth might exacerbate climate change, locking in tipping points that AI may be unable to solve.
Whether collaborations with climate experts could be fruitful. On this score, I am open to working with AI folks and hope to have a future piece discussing best practices specifically for communication on AI risk.
Key references and some notes:
The IPCC on coral reefs: “Coral reefs, for example, are projected to decline by a further 70–90% at 1.5°C (high confidence) with larger losses (>99%) at 2°C (very high confidence).” Me: Since we are expected to hit 1.5°C in the coming years, and are likely to be at 2.2-2.6°C by the end of the century, I fade the bar for coral reef collapse heading into midcentury.
I failed to reproduce the exact curve from Metaculus in the first figure, and so this curve is approximated from the data but hopefully tells roughly the same story.