For three decades, climate governance has been our most visible experiment in managing a slow-burn, civilisation-scale risk. It has taught us what it means to coordinate internationally under uncertainty, to act when the costs of delay are high, and to build institutions for a threat that unfolds unevenly across space and time. Yet, when it comes to artificial intelligence – a technology advancing far faster than our governance systems can track – we seem strangely unwilling to draw on this experience. As someone working at the intersection of climate governance, international justice, and foresight, I am struck by how seldom these domains speak to one another, and I continue to encourage dialogue between them. We are behaving as though we have never encountered a complex, high-stakes global risk before.
Climate communities have spent years refining the intellectual tools needed to think across decades, weigh systemic feedback loops, recognise lock-in dynamics, and anticipate tipping points. It is therefore surprising that the parallels with AGI governance – a field facing its own non-linearities, existential and tipping risks, and potential irreversibility – remain so rarely articulated. Where connections are made, they are often partial or diverted into adjacent conversations. A recent WIREs article promisingly draws parallels between AI and climate change, highlighting that both challenge democratic accountability and compress decision cycles beyond the rhythms of existing governance. It identifies “immediate and severe” risks to representation, accountability, and trust – all familiar concerns in climate adaptation debates. But the discussion quickly veers toward hypothetical authoritarian responses rather than engaging with the deeper structural parallels: the need to govern under uncertainty, the difficulty of coordinating across borders, and the risk that accelerating harms will outpace institutional learning. Other analyses start similarly well but drift – such as a law blog on “AI and Corporate Climate Governance” that opens by framing AI and climate change as twin transformations, but then settles into a discussion of AI’s carbon footprint (which is still important!) rather than its global-risk properties. Of course, AI is indeed driving unprecedented growth in energy demand: according to the International Energy Agency, global data centre electricity consumption is projected to reach around 945 TWh by 2030.
Still, more promising work on the climate parallel exists. The UNU Institute for Environment and Human Security explicitly argues that emerging AI governance frameworks can learn from the mechanisms built for climate action – particularly how climate governance evolved once adaptation was reframed as a cross-sector challenge rather than a niche environmental issue. This aligns with the widely held view that international governance only accelerates once risks are framed across multiple domains – the economy, security, infrastructure – and institutional linkages are activated.
There is also a growing recognition that AI and solar geoengineering share governance challenges. Both are global-scale technologies with profound scientific uncertainty, asymmetric incentives, and the risk of unilateral deployment. Geoengineering has long been haunted by the “governance-gap paradox” – the need for regulation before technical feasibility is fully proven. Without anticipatory rules, a small number of actors could force a planetary transition. This is precisely the fear now rising in frontier AI. Solar geoengineering startups are now entering a commercial “take-off” phase without adequate governance, demonstrating how quickly oversight frameworks can become obsolete once investment accelerates. The lesson is straightforward: the moment frontier-scale technologies attract capital, as frontier AI already has, the window for responsible governance narrows rapidly. If we wait until highly capable models are deployed across infrastructure systems, the possibilities for effective governance will shrink dramatically. This is why calls for pre-deployment licensing, capability forecasting, and international coordination are not alarmist – they are overdue.
Even the strongest comparative analyses to date still underestimate the tempo of AI risk. One recent study categorises AI impacts as “intermittent” and “non-linear,” labels AI as a “sectoral” risk rather than a collective one, and describes its economic stakes as “low to medium.” This framing does not capture the reality emerging today. Autonomous cyber-operations, AI-enabled biological design assistance, AI-driven drone swarms deployed in active conflict, and documented cases of deceptive model behaviour are not hypothetical long-term outcomes — they already belong in the short-term column.
A simple comparison makes the point clear. In climate governance, short-term harms include extreme weather; medium-term harms include ecosystem degradation and biodiversity loss; long-term harms include ocean-current collapse or permafrost thaw. For AI, the short-term column already contains biased automated decision-making and AI-driven cyberattacks; the medium-term column includes the concentration of power and pervasive AI surveillance; the long-term column contains misaligned, unsafe advanced systems that could act beyond human control. Both trajectories involve cascading risks and feedback loops; the difference is that one unfolds over generations, and the other may compress decades of risk into a few training cycles by 2027.
Why does this matter for climate communities? Because many of the governance challenges they have spent years addressing – scientific uncertainty, uneven risk distribution, free-rider problems, incentive misalignment, and political inertia – are resurfacing almost identically in the AI debate. Climate activists understand how windows of opportunity open and close, how early decisions lock in structural disadvantages, how bifurcated responses undermine collective action, and how global commons problems demand international coordination. These intuitions map almost directly onto frontier-AI governance, even though the communities working on these issues rarely intersect.
The deeper parallel, however, is psychological. Climate change long appeared too abstract and too slow-moving to demand aggressive early action, and only became politically unavoidable once harms were visible. AI risk, by contrast, moves at a pace that denies policymakers the time needed to form new instincts. The lesson is not that climate and AI risks are the same, but that slow recognition is as dangerous as slow response. When the scientific community finally accepted the evidence of ozone depletion, governments acted with unusual speed: the 1987 Montreal Protocol was negotiated within two years, committing states to phase down ozone-depleting substances before irreversible damage occurred. Its success demonstrates that precaution, taken early enough, can avert worst-case futures even under profound uncertainty. Our current inability to forecast the capabilities of the next generation of AI systems is therefore not a reason to wait; it is perhaps the strongest case for precaution before thresholds are crossed.
My own work across ecovillage networks, diplomacy, forecasting, and international governance has increasingly convinced me that these conversations should not be siloed. Climate communities understand precaution, interdependence, tipping dynamics, and irreversible risk. Peace communities understand de-escalation, treaties, and global coordination. Foresight communities understand signal detection, path-dependence, and systemic uncertainty. AI governance, by contrast, driven mainly by a few thousand passionate professionals, often appears to stand alone despite relying on familiar concepts. This is a missed opportunity. Climate activists and environmental organisations are natural allies in the governance of frontier AI. They know what it means to confront a risk evolving faster than political appetite, and they understand the consequences of waiting too long.
We do not have the luxury of decades to build a governance architecture for AGI. If climate governance teaches us anything, it is that waiting for harms to be undeniable is waiting too long. The lesson is not that AGI should be treated identically to climate change; the technologies and timescales differ. It is that we already have hard-won governance instincts that can be transferred. The window for precaution in climate governance was measured in decades, and even then progress faltered; the window for AGI may be measured in years. If we allow these governance challenges to remain disconnected, we risk repeating, at far greater speed, the failures we now regret in climate action. The parallel is not only analytically useful; it may be our best guide for avoiding the same mistakes.
Josephine Schwab is UK Country Representative to the Global Ecovillage Network, a Senior Research Fellow (European Security) with the European Institute of Policy Research and Human Rights, a former Veritas Forecasting Research Fellow with The Midas Project, an international justice news writer, and author of “Diplomacy in the Age of AGI”, featured by Futures4Europe, the European Commission’s foresight community platform.