

PLOS Blogs: All Models Are Wrong

Tuning to the climate signal

This is part 3 of a series of introductory posts about the principles of climate modelling. Others in the series: 1 | 2

My sincere apologies for the delays in posting and moderating. Moving house took much more time and energy than I expected. Normal service resumes.

I’d also like to mark the very recent passing of George Box, the eminent statistician to whom I owe the name of this blog and a core part of my scientific values. The ripples of his work and philosophy travelled very far. My condolences and very best wishes to his family.

The question I asked at the end of the last post was:

“Can we ever have a perfect reality simulator?”

I showed a model simulation with pixels (called “grid boxes” or “grid cells”) a few kilometres across: in other words, big. Pixel size, also known as resolution, is limited by available computing power. If we had infinite computing power, how well could we do? Imagine we could build a climate model representing the entire “earth system” – atmosphere, oceans, ice sheets and glaciers, vegetation and so on – with pixels a metre across, or a centimetre. Pixels the size of an atom. If we could do all those calculations, crunch all those numbers, could we have a perfect simulator of reality?
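To see why computing power limits pixel size, here is a rough back-of-envelope sketch (a toy illustration with assumed numbers and a simplified scaling, not figures from any real model). Halving the width of each pixel means four times as many pixels covering the surface, and numerical stability forces roughly half the timestep, so each halving multiplies the cost by about eight:

```python
# Rough back-of-envelope scaling of model cost with pixel size.
# Illustrative, assumed numbers only -- not from any real climate model.

EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth

def relative_cost(pixel_km, reference_km=100.0):
    """Cost relative to a model with pixels `reference_km` across.

    Halving the pixel width gives 4x as many pixels, and numerical
    stability forces roughly half the timestep, so cost grows roughly
    as (reference / pixel) cubed.
    """
    return (reference_km / pixel_km) ** 3

for pixel_km in [100, 25, 1, 0.001]:  # 100 km, 25 km, 1 km, 1 metre
    n_pixels = EARTH_SURFACE_KM2 / pixel_km**2
    print(f"{pixel_km:>8} km pixels: ~{n_pixels:.1e} columns, "
          f"~{relative_cost(pixel_km):.1e}x the cost of a 100 km model")
```

Metre-scale pixels, let alone atom-sized ones, are a very long way beyond any computer we can build.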

I’m so certain of the answer to this question, I named my blog after it.

A major difficulty with trying to simulate the earth system is that we can’t take it to pieces to see how it works. Climate modellers are usually physicists by training, and our instinct when trying to understand a thing is to isolate sections of it, or to simplify and abstract it. But we have limited success when we try to look at isolated parts of the planet, because everything interacts with everything else, and limited success with simplifications, because important things happen at every scale in time and space. We need to know a bit about everything. This is one of my favourite things about the job, and one of the most difficult.

For a perfect simulation of reality, we would need perfect understanding of every physical, chemical and biological process – every interaction and feedback, every cause and effect. We are indeed improving climate models as time goes on. In the 1960s, the first weather and climate models simulated atmospheric circulation, but other important parts of the earth system such as the oceans, clouds, and carbon cycle were either included in very simple ways (for example, staying fixed rather than changing through time) or left out completely. Through the decades we have developed the models, “adding processes”, aiming to make them better simulators of reality.

But there will always be processes we think are important but don’t understand well, and processes that happen on scales smaller than the pixel size, or faster than the model “timestep” (how often calculations are done, like the frame rate of a film). We include these, wherever possible, in simplified form. This is known as parameterisation.

Parameterisation is a key part of climate modelling uncertainty, and the reason for much of the disagreement between predictions. It is the lesser of two evils when it comes to simulating important processes: the other being to ignore them. Parameterisations are designed using observations, theoretical knowledge, and studies using very high resolution models.

For example, clouds are much smaller than the pixels of most climate models. Here is the land map from HadCM3, a climate model with lower resolution than the one in the last post.

Land-sea map for UK Met Office Unified Model HadCM3

If each model pixel could only show “cloud” or “not cloud”, then a simulation of cloud cover would be very unrealistic: a low resolution, blocky map where each block of cloud is tens or even hundreds of kilometres across. We would rather each model pixel were covered by some percentage of cloud, not just 0% or 100%. The simplest way to do this is to relate percentage cloud cover to percentage relative humidity: at 100% relative humidity, the pixel is 100% covered in cloud; as relative humidity decreases, so does cloud cover.

Parameterisations are not Laws of Nature. In a sense they are Laws of Models, designed by us wherever we do not know, or cannot use, laws of nature. Instead of “physical constants” that we measure in the real world, like the speed of light, they have “parameters” that we control. In the cloud example, there is a control dial for the lowest relative humidity at which cloud can form. This critical threshold doesn’t exist in real life, because the world is not made of giant boxes. Some parameters are equivalent to things that exist, but for the most part they are “unphysical constants”.
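Here is a minimal sketch of such a scheme (the linear ramp and the names `cloud_fraction` and `rh_crit` are my illustrative assumptions, not the scheme used in HadCM3 or any other particular model):

```python
def cloud_fraction(relative_humidity, rh_crit=0.7):
    """Fraction of a grid box covered by cloud, given the grid-box mean
    relative humidity (both expressed as fractions between 0 and 1).

    `rh_crit` is the "control dial": below this critical threshold there
    is no cloud, and above it cloud cover rises linearly to 100% at
    saturation. It is an "unphysical constant" chosen by the model
    developers, not something that can be measured in the real world.
    """
    if relative_humidity <= rh_crit:
        return 0.0
    if relative_humidity >= 1.0:
        return 1.0
    return (relative_humidity - rh_crit) / (1.0 - rh_crit)
```

Turning the `rh_crit` dial down makes the simulated world cloudier; turning it up makes it clearer.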

The developers of a model play god, or at least play a car radio, by twiddling these control dials until they pick up the climate signal: in other words, they test different values of the parameters to find the best possible simulation of the real world. For climate models, the test is usually to reproduce the changes of the last hundred and fifty years or so, but sometimes to reproduce older climates such as the last ice age. For models of Greenland and Antarctica, we only have detailed observations from the last twenty years.
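In its simplest caricature, this tuning is just: run the model with different values of a parameter, compare each simulation against observations, and keep the value that matches best. A toy sketch, in which `run_model` and `observations` are hypothetical stand-ins (real tuning involves many parameters, very expensive simulations, many observed quantities, and a great deal of expert judgement):

```python
import math

def tune(run_model, observations, candidate_values):
    """Return the candidate parameter value whose simulation best matches
    the observations, judged here by root-mean-square error."""
    best_value, best_rmse = None, math.inf
    for value in candidate_values:
        simulated = run_model(value)  # e.g. simulated global temperatures, year by year
        rmse = math.sqrt(
            sum((s - o) ** 2 for s, o in zip(simulated, observations))
            / len(observations)
        )
        if rmse < best_rmse:
            best_value, best_rmse = value, rmse
    return best_value, best_rmse

# Hypothetical usage: twiddle the critical relative humidity dial.
# best_rh_crit, error = tune(run_model, observed_temperatures,
#                            candidate_values=[0.6, 0.7, 0.8, 0.9])
```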

As our understanding improves and our computing power increases, we replace the parameterisations with physical processes. But we will never have perfect understanding of everything, nor infinite computing power to calculate it all. Parameterisation is a necessary evil. We can never have a perfect reality simulator, and all models are… imperfect.

In case you do lie awake worrying that the entire universe is a simulation: it’s fine, we can probably check that.

 

 

Discussion
  1. Hi Tamsin, nice article, thanks.
    As I understand it, the issue at the current capacity of supercomputers isn’t just one of resolution, but also of the number of iterations of the Navier-Stokes equations used to calculate the outcome of the interactions of climatic elements. The better the grid-scale resolution, the more truncated the iterations of the N-S equations, and vice versa.

    Has anyone estimated how much more powerful computers need to be before they can satisfy both these competing resource demands such that models become not just more useful, but sufficiently useful for practical purposes such as ‘pretty reliable’ 5-14 day forecasting?

  2. Hi Roger,

    Are you asking about temporal resolution, as opposed to spatial? And (as you will well know), reliable forecasts also depend on having good observations.

    Tamsin

  3. Dr. Edwards — that seemed like a non-responsive response. If those are the two possibilities you see, you could, IMO, easily have answered both or selected one. IMO that would have been more useful.

  4. Hi,

    I have a question about pixel size and the coastline paradox.
    What is the size of the Earth’s surface?
    If the Earth is flat (as in models?), it doesn’t matter, but there are vertical structures (valleys, walls in a street, trees) that exist or not depending on the pixel size. See figure 1.4, chapter 1, p. 113 in AR4 for an illustration.
    Does it matter for the radiative exchange between the Earth’s surface and the atmosphere?
    Does the Earth’s surface (the solid one) temperature or energy flux change with pixel size?
    Same questions for conduction.
    Response needed for entertainment only.
