How Do Climate Models Work?

Intro to Modelling

In the 1950s, humanity’s computers became powerful enough to solve complex mathematical equations using ultimately simple arithmetic, an approach known as numerical computing. It took about a decade, but scientists scaled the approach up from basic weather prediction to simulations of the entire global weather and climate system. In modern approaches, this is done by dividing the atmosphere, ocean, and land into a large number of discrete cells and propagating solutions forward through time and space. At first, scientists like Wally Broecker and Mikhail Budyko used energy balance models to predict the surface air temperature, or Global Mean Surface Temperature (GMST), over the span of decades. Their models were parameterized on factors like climate sensitivity, the warming expected per doubling of atmospheric CO2 (treated as a simple linear slope), and projected radiative forcing, the net change in the energy entering and leaving the global system.
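To make the energy balance idea concrete, here is a minimal zero-dimensional sketch in Python, assuming the standard logarithmic relationship between CO2 concentration and forcing and a fixed climate sensitivity. The parameter values and function names are illustrative placeholders, not anything taken from Broecker’s or Budyko’s papers.

```python
import numpy as np

# Minimal zero-dimensional energy balance sketch (illustrative values).
F_2X = 3.7          # W/m^2, radiative forcing from a doubling of CO2 (commonly cited value)
SENSITIVITY = 3.0   # deg C of equilibrium warming per CO2 doubling (roughly the accepted value)

def radiative_forcing(co2_ppm, co2_baseline_ppm=280.0):
    """Radiative forcing relative to a pre-industrial CO2 baseline.

    Uses the standard logarithmic approximation F = F_2x * log2(C / C0).
    """
    return F_2X * np.log2(co2_ppm / co2_baseline_ppm)

def equilibrium_warming(co2_ppm):
    """Equilibrium temperature change implied by a CO2 concentration.

    Assumes the surface responds instantly to forcing -- the same
    simplification many early models made (see footnote [1]).
    """
    return SENSITIVITY * radiative_forcing(co2_ppm) / F_2X

print(f"{equilibrium_warming(420.0):.2f} C")  # ~1.75 C at today's ~420 ppm, at equilibrium
```

Because the sketch ignores ocean heat uptake, it returns the equilibrium warming, which overshoots the roughly 1°C of warming observed so far; that bias is exactly the one discussed in footnote [1].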

In the late 1980s, NASA researchers led by James Hansen juiced the climate models by discretizing the Earth and were finally able to sound some alarm bells in the US Congress about the matter of a warming Earth. Global powers created the Intergovernmental Panel on Climate Change (IPCC) as a panel of scientists representing both their disciplines and their nations. For three decades, the IPCC has functioned as the authority on synthesizing and organizing global research to assess the risks, impacts, and possible modes of prevention of global climate change.

The Model Reality

The statistician’s adage “All models are wrong, but some are useful” holds ever true here. It is true that many of the climate models cited in the past 50+ years have made erroneous predictions. And there is no shortage of publications peddling the most extreme predictions on climate change, followed by some of the most egregious instances of cherry-picking data. Hansen’s claims, in particular, have been nitpicked to death. There is also, unfortunately, no shortage of Al Gore spouting utterly alarmist predictions.

Despite the alternative facts from Real Climate Science, Hansen gets so much right

The truth is, most past climate model projections have been skillful in predicting Global Mean Surface Temperature for years after publication, and many continue to be quite accurate almost half a century later. This is particularly impressive considering that when the first models were developed in the 1970s, many authorities on the subject thought the world was cooling.

The skill of previous climate model projections is now well understood, and models continue to improve over time thanks to a combination of increased compute resources, model sophistication, and greater observational capabilities. Below is a summary of the difference in the rate of warming between a model (or set of models) and NASA’s global temperature record.

Table from Carbon Brief analysis

Five different observational temperature time series representing surface air temperatures are commonly used to evaluate model skill. Since the 1970s, roughly 20 complete climate models for GMST have been developed in the community, including the efforts undertaken for the IPCC. Two factors influence the performance of a GMST projection: the physics, such as the modeled sensitivity to CO2, and the radiative forcing projections, such as how much greenhouse gas was assumed to be emitted in the interim.

What Do Models Get Right?

It is trivial to look at a model and assess how accurate its warming rate has been for a given set of years, as I have shown in the table above. However, this fails to evaluate the predictive mechanism of the model, namely the relationship between radiative forcing and temperature change: if the projected rate of CO2 emissions is incorrect but the model is otherwise accurate, the model will still appear to be wrong.

To understand the predictive skill of climate models, two metrics are useful: the change in temperature over a time interval, and the change in temperature for a given CO2 increase (the implied TCR). Both metrics show that models are quite skillful at predicting current warming rates, even those developed 50+ years ago. 14 of the 17 models assessed fall within the 95% confidence intervals of observations. This is particularly impressive when you consider that many of these models are biased high by failing to consider non-linear radiative effects and ocean heat uptake.

Comparison of trends in temperature versus time (top panel) and implied TCR (bottom panel) between observations and models over the model projection periods displayed at the bottom of the figure. Hausfather, 2019.
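The implied TCR metric in the bottom panel can be sketched in a few lines: fit linear trends to temperature and forcing over the projection window, then scale their ratio by the forcing of a CO2 doubling. This is my rough reading of the Hausfather et al. (2019) approach, and the series below are synthetic placeholders, not the actual model or observational data.

```python
import numpy as np

F_2X = 3.7  # W/m^2, forcing per CO2 doubling (commonly cited value)

def implied_tcr(years, temps, forcings):
    """Implied transient climate response over a projection period.

    Ratio of the linear temperature trend to the linear forcing trend,
    scaled to the forcing of a CO2 doubling.
    """
    temp_trend = np.polyfit(years, temps, 1)[0]        # deg C per year
    forcing_trend = np.polyfit(years, forcings, 1)[0]  # W/m^2 per year
    return F_2X * temp_trend / forcing_trend

# Synthetic placeholders -- swap in a model's projected GMST and forcing,
# or the observational record, over the same window.
years = np.arange(1970, 2020)
temps = 0.018 * (years - 1970)     # ~0.18 C per decade of warming
forcings = 0.04 * (years - 1970)   # ~0.4 W/m^2 per decade of added forcing
print(f"Implied TCR: {implied_tcr(years, temps, forcings):.2f} C per doubling")
```

Comparing a model’s implied TCR to the observational one separates how well its physics works from how well its authors guessed future emissions.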

In particular, Hansen’s models are shown to overestimate the subsequent warming by over 50%, but this mismatch is almost entirely due to overestimating methane output.

What Do Models Miss?

To be honest, not much when looking at GMST broadly. Many of the 1970s models show a rate of warming as a function of CO2 (implied TCR) at the high end, due to their simplification that the atmosphere reaches equilibrium with radiative forcing instantly [1]. In RS71, the implied TCR is anomalously low, due to an underestimation of the “climate sensitivity” parameter, the degrees by which global temperature rises for a doubling of atmospheric CO2 (currently accepted to be about 3°C).
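Footnote [1] explains why the instantaneous-equilibrium simplification biases warming high; a toy two-box model, a fast surface layer exchanging heat with a slow deep ocean, makes the point. The heat capacities and exchange coefficient below are illustrative guesses, not values from any published model.

```python
# Toy two-box (surface + deep ocean) energy balance model showing climate
# inertia. Parameter values are illustrative, not tuned to any published model.
F_2X = 3.7            # W/m^2, forcing from a CO2 doubling
LAMBDA = F_2X / 3.0   # W/m^2/K, feedback parameter giving ~3 C equilibrium sensitivity
GAMMA = 0.7           # W/m^2/K, surface <-> deep ocean heat exchange
C_SURF = 8.0          # W yr/m^2/K, effective heat capacity of the surface layer
C_DEEP = 100.0        # W yr/m^2/K, effective heat capacity of the deep ocean

def step(t_surf, t_deep, forcing, dt=0.1):
    """Advance both boxes by dt years under a constant forcing (W/m^2)."""
    d_surf = (forcing - LAMBDA * t_surf - GAMMA * (t_surf - t_deep)) / C_SURF
    d_deep = GAMMA * (t_surf - t_deep) / C_DEEP
    return t_surf + dt * d_surf, t_deep + dt * d_deep

# Apply an abrupt CO2 doubling and watch the surface lag its equilibrium value.
t_surf = t_deep = 0.0
for _ in range(700):  # 70 years at dt = 0.1
    t_surf, t_deep = step(t_surf, t_deep, F_2X)
print(f"Surface warming after 70 years: {t_surf:.2f} C (equilibrium is 3.0 C)")
```

A model that skips the deep-ocean box jumps straight to the equilibrium value, which is why the 1970s implied TCRs cluster at the high end.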

Newer, improved projections come from the Coupled Model Intercomparison Project (CMIP), which pools a statistical blend of climate models from groups around the world and feeds into the IPCC’s assessments. These models run future greenhouse gas concentration scenarios, known as Representative Concentration Pathways. Grouping models together is intended to make projections more robust to factors that individual models miss and that would otherwise lead to systematic error, such as previous models’ tendency to underestimate Arctic warming. Not included in the assessment above, the early models of Soviet researcher Mikhail Budyko, who pioneered the Earth energy balance method in the 60s, have proven to be astonishingly skillful, including regarding Arctic ice [2].

Budyko’s models still hold up, from EOS
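The statistical blend of CMIP models mentioned above is, at its core, just an ensemble average with an uncertainty band. Here is a toy version with synthetic warming trajectories standing in for the individual models; the trends and noise levels are made up for illustration.

```python
import numpy as np

# Toy multi-model ensemble: each row stands in for one model's projected GMST
# anomaly under the same scenario (synthetic numbers, not CMIP output).
rng = np.random.default_rng(0)
years = np.arange(2020, 2051)
n_models = 12
trends = rng.normal(loc=0.025, scale=0.006, size=n_models)  # deg C per year, spread across models
ensemble = trends[:, None] * (years - 2020) + rng.normal(0.0, 0.05, (n_models, years.size))

ensemble_mean = ensemble.mean(axis=0)
low, high = np.percentile(ensemble, [5, 95], axis=0)
print(f"Warming in 2050 relative to 2020: {ensemble_mean[-1]:.2f} C "
      f"(5-95% range {low[-1]:.2f} to {high[-1]:.2f} C)")
```

The averaging washes out idiosyncratic errors in individual models, though it cannot fix a bias shared by the whole ensemble, like the historical underestimate of Arctic warming.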

From Model to Monitor

The 20-some-odd models that humans have generated in the previous 50 years have not only been skillful at predicting future temperatures, they have been quite useful as well. US chemists’ work in the 70s led to the curbing of chlorofluorocarbons via the Montreal Protocol, a famously successful international agreement. Hansen’s 1988 Congressional testimony on climate change is well known for raising broad awareness (and controversy, of course), and the IPCC, founded by the UN that same year, remains to date the sole coordinator of international efforts to assess climate change.

Constraints

Though the IPCC continues to put out assessment reports with more sophisticated models, the increase in complexity has been hard fought. In the 90s the average resolution was 300 km; today it is between 50 and 100 km, but each halving of grid spacing requires roughly an order of magnitude more compute. High performance computing (HPC) for weather systems already accounts for a significant share of global supercomputing use, and models are now being developed that simulate at the 1 km scale for ~10 days. These models are capable of representing complex interactions between vegetation and soil carbon, marine ecosystems and ocean currents, and human activities, which can then be piped into larger models to predict local climate instabilities and extremes over longer periods.
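The “order of magnitude per halving” claim comes from a simple scaling argument: halving the grid spacing roughly doubles the number of cells in each horizontal direction (and often the vertical), and numerical stability then demands a shorter time step as well. A back-of-envelope version, under those assumptions:

```python
def relative_cost(old_res_km, new_res_km, refine_vertical=True):
    """Rough compute-cost multiplier for refining model resolution.

    Assumes cost scales with the number of grid cells times the number of
    time steps, and that the time step shrinks in proportion to the grid
    spacing (a CFL-type constraint). Back-of-envelope only.
    """
    ratio = old_res_km / new_res_km
    horizontal = ratio ** 2                       # more cells in x and y
    vertical = ratio if refine_vertical else 1.0  # optionally more levels too
    time_steps = ratio                            # smaller cells need smaller steps
    return horizontal * vertical * time_steps

print(relative_cost(100, 50))  # ~16x for a single halving
print(relative_cost(100, 1))   # ~1e8x to go from 100 km to 1 km
```

That eight-orders-of-magnitude gap between today’s ~100 km models and a global 1 km model is why the kilometer-scale runs are currently limited to days rather than centuries.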

I think the demand for weather simulations is a good case study here. Regional weather models now run at roughly 1-10 km resolution, needed to predict convection and rainfall, having improved rapidly since the 90s. Predictions of weather events on the timeframe of days are extremely valuable, particularly for energy markets, and the US market has grown at around a 9% CAGR over the past decade or so. NOAA has found that 3-6% of the variability in US GDP can be attributed to weather, while the average number of $1bn weather events in the US from 2008-2015 doubled compared to the previous 35 years [3].

Next Generation Climate Models Are Economical

Besides exploring novel architectures for HPC, the trick to circumventing the computational constraints of next generation climate models is leveraging the massive influx of high-fidelity data from Earth-observation satellites, which are in the middle of a renaissance thanks to exponential reductions in launch cost and the miniaturization of electronics. I wrote about it in this post. The amount of this Earth Observation (EO) data now doubles in under two years, yet much of it goes unused due to a lack of continuity and validation. Data is messy.

Using these massive data sets to feed high fidelity simulations on the order of 1-10 km is a flywheel for overcoming the hurdle of computational constraints. Much as with local weather simulations, there is a large and growing market for these simulations in the context of extreme weather events. EO data can be used for high sensitivity simulation of coastal sea level rise, flooding in deltas and agriculturally heavy regions in both developed and developing countries, and prediction of phytoplankton blooms and marine ecosystem health. These are all massively economically disruptive events, on the order of billions of dollars. Plus, the satellite data arrives effectively in real time, so errors can be fed back into models with low latency. Satellite-informed models deliver water vapor, soil moisture, evapotranspiration rates, surface water, ice, and snow quantities at fidelities that improve by the month; these data will allow us to industrially monitor flood and drought risks. Satellite data companies like Descartes Labs are even supporting anti-deforestation efforts through real-time monitoring products. This data is accessible and it powers valuable engines: the cost of the compute can become an afterthought.
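As a flavor of what industrially monitoring flood risk from EO data might look like at its simplest, here is a hypothetical sketch that flags grid cells whose latest soil-moisture reading sits far above its historical baseline. The array is synthetic stand-in data and the 3-sigma threshold is arbitrary; a real pipeline would pull a validated satellite product and feed the flags into a regional hydrology model.

```python
import numpy as np

# Synthetic stand-in for a gridded satellite soil-moisture product:
# (time, lat, lon) volumetric water content on a coarse global grid.
rng = np.random.default_rng(1)
history = rng.normal(loc=0.25, scale=0.04, size=(365, 90, 180))  # a year of daily fields
latest = history[-1] + rng.normal(0.0, 0.02, size=(90, 180))     # today's retrieval

baseline_mean = history[:-1].mean(axis=0)
baseline_std = history[:-1].std(axis=0)

# Flag cells whose latest soil moisture is more than 3 sigma above baseline --
# a crude proxy for elevated flood risk worth passing to a regional model.
z_score = (latest - baseline_mean) / baseline_std
flood_watch = z_score > 3.0
print(f"{flood_watch.mean():.1%} of cells flagged for review")
```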

The Climate Reality

The 6th Assessment Report (AR6) by the IPCC was released this year and offered a look at the pathways to Net Zero Emissions by 2050, the goal of most countries. As of this year, emissions are projected to increase by 10% by 2030, compared to the roughly 45% reduction originally projected as required for Net Zero 2050. Seven countries account for 50% of all emissions, though 35 countries have already peaked in their emissions. Below, a graphic from AR6 visualizes the representative concentration pathways up to 2050. RCP 4.5 is considered the scenario in which no climate policy is enacted, while RCP 7.0 has long been considered the baseline outcome.

Global surface temperature increase since 1850–1900 (°C) as a function of cumulative CO₂ emissions (GtCO₂), AR6

Some Context on Targets

It is worth noting that Net Zero 2050 is based upon the constraint that global mean surface temperatures rise no more than 1.5°C above pre-industrial levels. The origin of this number is economist William Nordhaus’s suggestion that 2°C above pre-industrial was the maximum global condition any human civilization had experienced in the past ~200,000 years [4]. Hansen’s commentary in the 80s brought the number down by 0.5°C.

The common misconception that the world will end, or something like it, if GMST goes beyond 1.5°C is not helpful, especially since we are currently on track to miss that target. There’s also the unfortunate truth that the number is ultimately somewhat arbitrary. Current policies have us on track for around 2.8°C by 2099. I predict the target will eventually be revised upwards if we continue to miss global emissions targets. Humans are a crafty bunch and society is perfectly capable of continuing at several degrees above pre-industrial levels, though many, many plants, animals, and people will perish. Post-industrial human activity has already wiped out 50%+ of all animal populations since the 1970s. Numbers like this, not that the Earth is 1°C warmer, could be front and center.

How To Use Climate Models

A far more constructive way to frame the race to decarbonize is then not some arbitrary number that we will continue to change, but rather how many people, plants, and animals are predicted to die for a given emission level or rate, and where. We know that many of these people, for one, will be those in flood-prone or agriculturally dependent regions susceptible to large disruptions from flood and drought, without the economic resources to offset them. Air pollution in dense industrial areas will continue to kill more than seven million people per year. Our models and targets should be tooled to predict these outcomes with ever increasing precision thanks to EO data and supercomputers. Not only is there a market for it, but by doing so, our predictive models become tangibly useful in saving lives from global extremes. Or at least, they become exceptionally clear about the cost.


Augmentation

[1] The temperature of the Earth does not respond instantaneously to radiative forcing. This “climate inertia” is primarily a result of the heat capacity of the ocean, and many models up until Hansen’s in the 80s ignored it. Water vapor compounds the problem: water vapor in the atmosphere creates a positive feedback loop in the heating of the planet (the Clausius-Clapeyron relationship), with a roughly 1°C increase in temperature allowing 6-7% more water vapor in the atmosphere.

[2] Budyko’s methods were apparently quite rudimentary, but the paper is in Russian and I haven’t found a free English version yet.

[3] While it seems like the numbers in this study might be a little arbitrary, bottom line is that the number of these hugely costly weather events is going up. Just ask the Texas power grid.

[4] Economists don’t usually get things right.
