I’ll write about the content of the event in future posts, but I wanted to jot down my impressions about its atmosphere, which I think was unique for an event of this kind.
I’d been nervous in the morning and early afternoon. My relaxed day of preparation was steadily eroded by things going slightly wrong, such as realising, after my laptop battery ran out, that I’d left my charger on the other side of Bristol after Bright Club. I had a long list of papers I wanted to skim, notes I wanted to make, and an introductory talk to write. Taking the train with my friend and colleague Jonty, I was too stressed to speak and had a sense of humour failure when he gently teased me about leaving my adapter somewhere (again, he said, though I can’t think which other time *cough* times he was thinking of…).
Arriving at the Science Festival site I felt more at ease. It’s like home. In the previous two years I’d come as a punter, had huge fun, loved the events and met extremely wonderful people. This was my first year as an event organiser, and as the time approached I became less nervous, in part because I couldn’t do any more preparation. It helped to take some time out for a gentle, fun radio interview by (the extremely wonderful) Timandra Harkness.
The event itself was an hour long, with introductory talks from the panel – myself, “climate agnostic” Jonathan Jones, professor of physics at the University of Oxford, and Claire Craig, science advisor in the UK government – followed by a few questions from chair Mark Lythgoe and many from the audience. The venue holds 200-300, depending on seating arrangements, and was almost sold out. After the main event we continued at the “Talking Point”, a small tent with informal seating, with around 50-70 of the audience. We took questions for, I think, another hour.
I spent more time battling with the chair than the other panel members did! Mark repeatedly accused me of waffling and not answering the question. I told him I didn’t like his questions when they were ill-defined or about policy (I don’t make public statements about preferred policy options). Some of the audience questions were also a little heated, on both sides: those worrying about climate change, and those worrying about climate scientists.
Listening to @flimsin in talking point tent, sending a big virtual hug to her as she keeps getting interrupted mid explanation. #stressful -- Amanda Woodman-Hardy (Cabot Institute)
But despite this, the mood of the event was absolutely wonderful throughout. Almost all the “battles” were respectfully teasing, filled with humour. We laughed a lot, which must be a first for a discussion about climate change, scepticism and policy! I put this down to the warm and respectful relationships between the panel members (even though Claire and Jonathan had only just met), to our joyfully provocative chair, and to the audience who quickly created the serious and light atmosphere we hoped for. It was a privilege to have such an interested and supportive audience, such thoughtful, interesting and honest co-members of the panel, and a fun chair who dug into us to make us react and think more deeply about our answers.
At the end of the event, Mark stepped back from his aim of provoking us for theatre and entertainment. He thanked us very sweetly for trying so hard to answer so many questions, and said we were the bravest panel he’d ever seen. And he sent a wonderful tweet afterwards:
Best panel on climate models yet thanks to star cast fab @flimsin articulate @nmrqip wise Claire Craig #cheltscifest brave and entertaining – Mark Lythgoe (chair)
It was an enormous pleasure to put on this event, and I learned a lot. Thank you to the Cabot Institute, Bristol Environmental Risk Research Centre (BRISK), ice2sea and my department (School of Geographical Sciences at the University of Bristol) for sponsoring it, thank you to the Centre for Public Engagement for supporting me (including giving me an award! hurrah!), thank you to my panel members and chair for saying yes and working so hard, and thank you to all that came.
Ice2sea is an enormous project, involving 24 institutions (such as universities and meteorological institutes), most of them in the EU but some further afield. Our aim was to improve observations and modelling of continental ice – the Greenland and Antarctic ice sheets, and the world’s glaciers and ice caps – to better understand their behaviour in the recent past and to project how they might change in the future. The overarching goal was to support the Intergovernmental Panel on Climate Change (IPCC) in assessing current understanding of sea level change. Ice2sea is an umbrella over many areas of research, which makes it difficult to summarise the results in a few quotes.
The reason we are in the news now is because we are having our final project meeting at the Royal Institution (Ri) in London this week. Many of the scientists involved in ice2sea, from across the EU and beyond, are gathering together to give talks on their work and discuss the results. Three of us are also giving a free public talk tomorrow at 7pm.
Publications and collaborations will continue long after the end of the project. We have a legacy of improved models, robust observations, and new scientists. I’m certainly planning to continue working with many of my ice2sea collaborators in the future.
Given that we are meeting at the Ri for three days, and the project is slowly drawing to a close, ice2sea held a press conference yesterday morning explaining what the project is, what our aims were, and what we feel we’ve achieved. Most of this focused on the benefits of this kind of project – the international collaboration, exploiting expertise from different countries, and the coordinated study of all aspects of continental ice.
We’ve produced a summary document, “From Ice To High Seas”, that explains the science of continental ice and sea level to a general audience and highlights the individual publications that we’ve produced. It describes ice2sea papers that have been published in journals but also those that haven’t yet completed peer-review. So the document is a summary of our research as of April 2013, but the final results may change as these papers work their way through the system.
There are two sets of numbers from our summary document that have been highlighted by the media. I’ll explain where these come from:
“For that one scenario we have an ice sheet and glaciers contribution to sea level rise of between 3.5 and 36.8 cm by 2100,” – David Vaughan
“They concluded there was a one in 20 chance that the melting ice would drive up sea levels by more than 84 centimetres, essentially saying there’s a 95% chance it wouldn’t go above this figure.” – Matt McGrath
The first is a summary of the publications that make projections using models of the atmosphere, ocean, ice sheets and glaciers. The scenario David Vaughan (from the British Antarctic Survey, the ice2sea project leader) refers to is a “mid-range” emissions scenario, SRES A1B. Most, but not all, of these papers have completed peer-review. The list of papers relevant to that range is at the end of the post. We stated the minimum and maximum range of these projections in the summary document.
The second is from one publication (Bamber and Aspinall, 2013: listed below) that uses “expert elicitation”, a structured way of combining the judgement of a group of experts. This method deserves its own post, but here is an article about it by Willy Aspinall from the University of Bristol. This estimate is across all scenarios, not just one in particular.
So the first is the range of results from different plausible modelling options for the A1B scenario, while the second is an expert assessment of the probability of sea level rise given that the models do not incorporate every process and we may not follow the A1B emissions scenario.
Many of these papers will be included in the IPCC Fifth Assessment Report (AR5) being published this September; others will not, for example if they were not accepted by journals in time for the AR5 deadline. But this interim summary of our own work is not intended to “compete” with the IPCC, because that will assess all sea level research worldwide rather than only ice2sea results.
Not only that, but these papers are a small part of the results of ice2sea. We have also led or contributed to major steps forward in observing and modelling continental ice and sea level change.
We also have a special issue in The Cryosphere (open access).
Our focus has very much been on improving our understanding of the processes involved. This is a different approach to several recent studies that have used very simple assumptions or relationships to make projections for the future (such as extrapolation of past trends). Our work, such as our projections for Greenland glaciers (Nick et al.), shows that continental ice is complex, and can have both periods of rapid change and of relative stability. We can’t just extrapolate the past into the future. We need to understand the processes of ice, the detailed landscape of the bedrock underneath, and the potential range of changes to the atmosphere and oceans, to make physically-based projections of the future.
Update: some are asking about comparisons of our summary range with the previous IPCC projections (AR4).
First, the ice2sea headline results are the substantial improvements in models and observations, not our simple summary of the minimum and maximum projections.
Second, we should be cautious of comparing with AR4 land ice contributions for the same scenario (A1B), because those projections were from 1980-1999 to 2090-2099 while ours are for 2000-2100, and theirs are stated as a 5-95% range while ours don’t have a probability range attached.
Third, there’s another reason we can’t compare with the total sea level contributions: they combined the land ice and thermal expansion contributions using an assumption that they are independent, but we haven’t repeated this analysis.
There is a quote from David in the BBC article about our high end estimate being about 10cm higher than AR4, but this was a spur-of-the-moment, back-of-the-envelope comparison (in response to a press request) with the best-known AR4 range which includes all scenarios, just to provide some context. It’s not meant as a definitive, like-for-like comparison. Even if he had said about 17cm higher, the difference for the A1B land ice contributions, it would still have been a ballpark figure because of the different definitions of the contribution. That is why we haven’t made that comparison in the summary document.
In short, do wait for the AR5 in September for consensus sea level projections and how they compare with AR4. For ice2sea, please have a look at individual publications, and the summary document, but bear in mind that we are not trying to do the job of the IPCC: rather we have spent considerable effort trying to make their job easier, by providing science reported in individual publications.
Bamber and Aspinall (2013), “An expert judgement assessment of future sea level rise from the ice sheets”
Barrand et al. (2013), “Computing the volume response of the Antarctic Peninsula ice sheet to warming scenarios to 2200”
Fettweis et al. (2013), “Estimating the Greenland ice sheet surface mass balance contribution to future sea level rise using the regional atmospheric climate model MAR”
Fürst et al., in review.
Giesen and Oerlemans (2013), “Climate-model induced differences in the 21st century global and regional glacier contributions to sea-level rise”
Goelzer et al. (2013), “Sensitivity of Greenland ice sheet projections to model formulations”
Payne et al., in review.
Ever since I started scientific research, I’ve been fascinated by uncertainty. By the limits of our knowledge. It’s not something we’re exposed to much at school, where Science is often equated with Facts and not, as it should be, with Doubt and Questioning.
My job as a climate scientist is to judge these limits, our confidence, in predictions of climate change and its impacts, and to communicate them clearly.
A scientist is never certain. We all know that. We know that all our statements are approximate statements with different degrees of certainty; that when a statement is made, the question is not whether it is true or false but rather how likely it is to be true or false. People search for certainty. But there is no certainty.
– Richard Feynman
We can’t avoid scientific uncertainty, because we can’t perfectly measure or understand the universe. So we need to be very clear about what we know, what we don’t know, and the surprises we might face.
It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty…I’m talking about…bending over backwards to show how you’re maybe wrong
– Richard Feynman
We also need to think about how we make decisions in the face of uncertainty.
My fellow panel members will be Jonathan Jones, professor of physics at Oxford University and “climate agnostic”, and Claire Craig, science advisor in the UK government. I’m going to be talking with them about the strengths and weaknesses of models of climate change and its impacts (for example, on sea level), and about how we can make sense of their predictions in the wider world.
Have the models improved over the past decades? How do we test them? What can we say with confidence about the future, and what is still deeply uncertain? How do we deal with scientific uncertainty when we need to make decisions? What do we need from climate scientists, and are climate models useful? What is the place of scientific evidence alongside everything else we need to consider in our society?
I’m looking forward to a discussion that might at times be “heated”, but seeks common ground on ways to improve climate science and the way it is used. A grown-up discussion. If you can get to Cheltenham on the 7th June, please join us.
The final big question in this series is:
How do we predict our future?
Everything I’ve discussed so far has been about how to describe the earth system with a computer model. How about us? We affect the earth. How do we predict what our political decisions will be? How much energy we’ll use, from which sources? How we’ll use the land? What new technologies will emerge?
We can’t, of course.
Instead we make climate predictions for a set of different possible futures. Different storylines we might face.
Here is a graph of past world population, and the United Nations predictions for the future using four possible fertility rates.
The United Nations aren’t trying to answer the question, “What will the world population be in the future?”. That depends on future fertility rates, which are impossible to predict. Instead they ask “What would the population be if fertility rates were to stay the same? Or decrease a little, or a lot?”
We do the same for climate change: not “How will the climate change?” but “How would the climate change if greenhouse gas emissions were to keep increasing in the same way? Or decrease a little, or a lot?” We make predictions for different scenarios.
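To make the idea concrete, here’s a toy sketch of scenario-based projection. The growth rates are invented for illustration – they are not the UN’s actual fertility assumptions – but the logic is the same: one projection per “what if”:

```python
# A toy sketch of scenario-based projection. The rates below are
# invented for illustration, not the UN's actual fertility assumptions.

def project(value, annual_rate, years):
    """Project a quantity forward under a fixed annual growth rate."""
    for _ in range(years):
        value *= 1 + annual_rate
    return value

population_2013 = 7.1e9  # roughly the world population at the time of writing
scenarios = {"high": 0.011, "medium": 0.005, "low": -0.002}  # assumed rates

for name, rate in scenarios.items():
    final = project(population_2013, rate, 87)  # out to the year 2100
    print(f"{name:>6} scenario: {final / 1e9:.1f} billion in 2100")
```

Each scenario gets its own answer; none of them is a claim about what *will* happen.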
Here is a set of scenarios for carbon dioxide (CO2) emissions. It also shows past emissions on the left-hand side.
These scenarios are named “SRES” after the Special Report on Emissions Scenarios in 2000 that defined the stories behind them. For example, the A1 storyline is
“rapid and successful economic development, in which regional average income per capita converge – current distinctions between “poor” and “rich” countries eventually dissolve”,
and within it the
A1FI scenario is the most fossil-fuel intensive. The B1 storyline has
“a high level of environmental and social consciousness combined with a globally coherent approach to a more sustainable development”,
though it doesn’t include specific political action to reduce human-caused climate change. The scenarios describe CO2 and other greenhouse gases, and other industrial emissions (such as sulphur dioxide) that affect climate.
We make climate change predictions for each scenario; endings to each story. Here are the predictions of temperature.
Each broad line is an estimate of a conditional probability: the probability of a temperature increase, given a particular scenario of emissions.
Often this kind of prediction is called a projection to mean it is a “what would happen if” not a “what will happen”. But people do use the two interchangeably, and trying to explain the difference is what got me called disingenuous and Clintonesque.
We make projections to help distinguish the effects of our possible choices. There is uncertainty about these effects, shown by the width of each line, so the projections can overlap. For example, the highest temperatures for the B1 storyline (environmental, sustainable) are not too different from the lowest temperatures for
A1FI (rapid development, fossil intensive). Our twin aims are to try to account for all the uncertainties, and to try to reduce them, to make these projections the most reliable and useful they can be.
A brief aside: the approach to ‘possible futures’ is now changing. It’s a long and slightly tortuous chain to go from storyline to scenario, then to industrial emissions (the rate we put gases into the atmosphere), and on to atmospheric concentrations (the amount that stays in the atmosphere without being, for example, absorbed by the oceans or used by plants). So there has been a move to skip straight to the last step: the SRES scenarios are being replaced with Representative Concentration Pathways (RCPs).
The physicist Niels Bohr helpfully pointed out that
“Prediction is very difficult, especially if it’s about the future.”
And the ever-wise Douglas Adams added
“Trying to predict the future is a mug’s game.”
They’re right. Making predictions about what will happen to our planet is impossible. Making projections about what might happen, if we take different actions, is difficult, for all the reasons I’ve discussed in this series of posts. But I hope, as I said in the first, I’ve convinced you it is not an entirely crazy idea.
In the previous post I said there will always be limits to our scientific understanding and computing power, which means that “all models are wrong.” But it’s not as pessimistic as this quote from George Box seems, because there’s a second half: “… but some are useful.” A model doesn’t have to be perfect to be useful. The hard part is assessing whether a model is a good tool for the job. So the question for this post is:
How do we assess the usefulness of a climate model?
I’ll begin with another question: what does a spam (junk email) filter have in common with state-of-the-art predictions of climate change?
The answer is they both improve with “Bayesian learning”. Here is a photo of the grave of the Reverend Thomas Bayes, which I took after a meeting at the Royal Statistical Society (gratuitous plug of our related new book, “Risk and Uncertainty Assessment for Natural Hazards”):
Bayesian learning starts with a first guess of a probability. A junk email filter has a first guess of the probability of whether an email is spam or not, based on keywords I won’t repeat here. Then you make some observations, by clicking “Junk” or “Not Junk” for different emails. The filter combines the observations with the first guess to make a better prediction. Over time, a spam filter gets better at predicting the probability that an email is spam: it learns.
The filter combines the first guess and observations using a simple mathematical equation called Bayes’ theorem. This describes how you calculate a “conditional probability”, a probability of one thing given something else. Here this is the probability that a new email is spam, given your observations of previous emails. The initial guess is called the “prior” (first) probability, and the new guess after comparing with observations is called the “posterior” (afterwards) probability.
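If you like seeing these things written down, here’s a minimal sketch of that update step in Python. The keyword probabilities are invented for illustration – a real filter combines evidence from many words:

```python
# Minimal sketch of a Bayesian spam filter update. The probabilities
# are invented for illustration; real filters use many keywords.

def update(prior, p_evidence_given_spam, p_evidence_given_ham):
    """Bayes' theorem: P(spam | evidence) =
       P(evidence | spam) * P(spam) / P(evidence)."""
    p_evidence = (p_evidence_given_spam * prior
                  + p_evidence_given_ham * (1 - prior))
    return p_evidence_given_spam * prior / p_evidence

belief = 0.5  # prior: first guess that half of all email is spam
for _ in range(3):  # each sighting of a spammy keyword updates the belief
    belief = update(belief, p_evidence_given_spam=0.8, p_evidence_given_ham=0.1)
    print(f"posterior P(spam) = {belief:.3f}")
```

Each posterior becomes the prior for the next observation: that’s the learning.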
The same equation is used in many state-of-the-art climate predictions. We use a climate model to make a first guess at the probability of future temperature changes. One of the most common approaches for this is to make predictions using many different plausible values of the model parameters (control dials): each “version” of the model gives a slightly different prediction, which we count up to make a probability distribution. Ideally we would compare this initial guess with observations, but unfortunately these aren’t available without (a) waiting a long time, or (b) inventing a time machine. Instead, we also use the climate model to “predict” something we already know, to make a first guess at the probability of something in the past, such as temperature changes from the year 1850 to the present. All the predictions of the future have a twin “prediction of the past”.
We take observations of past temperature changes – weather records – and combine them with the first guess from the climate model using Bayes’ theorem. The way this works is that we test which versions of the model from the first guess (prior probability) of the past are most like the observations: which are the most useful. We then apply those “lessons” by giving these the most prominence, the greatest weight, in our new prediction (posterior probability) of the future. This doesn’t guarantee our prediction will be correct, but it does mean it will be better because it uses evidence we have about the past.
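Here’s a toy sketch of that weighting step, assuming for simplicity an ensemble built by varying one control dial, and Gaussian observation error. Every number is invented; it’s the shape of the calculation that matters:

```python
import numpy as np

# Toy sketch of Bayesian weighting of an ensemble of model versions.
# All numbers are invented; only the shape of the calculation matters.

params = np.linspace(0.5, 1.5, 11)   # settings of one control dial
hindcasts = 0.8 * params             # each version's "prediction of the past" (degC)
forecasts = 3.0 * params             # each version's prediction of the future (degC)

obs, obs_error = 0.85, 0.1           # assumed past warming and its uncertainty

# Gaussian likelihood: versions whose hindcast best matches the
# observations get the greatest weight in the posterior prediction.
likelihood = np.exp(-0.5 * ((hindcasts - obs) / obs_error) ** 2)
weights = likelihood / likelihood.sum()

print(f"prior mean forecast:     {forecasts.mean():.2f} degC")
print(f"posterior mean forecast: {(weights * forecasts).sum():.2f} degC")
```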
Here’s a graph of two predictions of the probability of a future temperature change (for our purposes it doesn’t matter what) from the UK Climate Projections:
The red curve (prior) is the first guess, made by trying different parameter values in a climate model. The predicted most probable value is a warming of about three degrees Celsius. After including evidence from observations with Bayes’ theorem, the prediction is updated to give the dark blue curve (posterior). In this example the most probable temperature change is the same, but the narrower shape reflects a higher predicted probability for that value.
Probability in this Bayesian approach means “belief” about the most probable thing to happen. That sounds strange, because we think of science as objective. One way to think about it is the probability of something happening in the future versus the probability of something that happened in the past. In the coin flipping test, three heads came up out of four. That’s the past probability, the frequency of how often it happened. What about the next coin toss? Based on the available evidence – if you don’t think the coin is biased, and you don’t think I’m trying to bias the toss – you might predict that the probability of another head is 50%. That’s your belief about what is most probable, given the available evidence.
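To make “belief” concrete, here’s a hypothetical toy: represent belief about the coin’s bias as a Beta distribution and update it with the evidence. With a strong prior belief that the coin is fair, the prediction stays close to 50% even after three heads out of four:

```python
# Toy Beta-binomial update: belief about P(heads) as a Beta(a, b)
# distribution, whose mean is a / (a + b). Numbers chosen for illustration.

heads, tails = 3, 1          # the evidence: three heads out of four flips

a, b = 50, 50                # strong prior belief that the coin is fair
a, b = a + heads, b + tails  # Bayes' update for coin flips

print(f"past frequency of heads:   {heads / (heads + tails):.2f}")  # 0.75
print(f"belief next toss is heads: {a / (a + b):.3f}")              # ~0.51
```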
My use of the word belief might trigger accusations that climate predictions are a matter of faith. But Bayes’ theorem and the interpretation of “probability” as “belief” are not only used in many other areas of science, they are thought by some to describe the entire scientific method. Scientists make a first guess about an uncertain world, collect evidence, and combine these together to update their understanding and predictions. There’s even evidence to suggest that human brains are Bayesian: that we use Bayesian learning when we process information and respond to it.
The next post will be the last in the introductory series on big questions in climate modelling: how can we predict our future?
My sincere apologies for the delays in posting and moderating. Moving house took much more time and energy than I expected. Normal service resumes.
I’d also like to mark the very recent passing of George Box, the eminent statistician to whom I owe the name of my blog and a core part of my scientific values. The ripples of his work and philosophy travelled very far. My condolences and very best wishes to his family.
The question I asked at the end of the last post was:
“Can we ever have a perfect reality simulator?”
I showed a model simulation with pixels (called “grid boxes” or “grid cells”) a few kilometres across: in other words, big. Pixel size, also known as resolution, is limited by available computing power. If we had infinite computing power how well could we do? Imagine we could build a climate model representing the entire “earth system” – atmosphere, oceans, ice sheets and glaciers, vegetation and so on – with pixels a metre across, or a centimetre. Pixels the size of an atom. If we could do all those calculations, crunch all those numbers, could we have a perfect simulator of reality?
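As an aside, some back-of-the-envelope arithmetic (my own, with round numbers) shows just how big that “if” is:

```python
# Rough arithmetic: grid boxes for a one-metre-resolution atmosphere model.
earth_surface_m2 = 5.1e14     # Earth's surface area in square metres
atmosphere_depth_m = 1.0e4    # ~10 km, roughly the depth of the weather

boxes = earth_surface_m2 * atmosphere_depth_m  # one-metre cubes
print(f"{boxes:.1e} grid boxes")               # ~5.1e+18
```

That’s about five billion billion boxes to recalculate every timestep – before we even start on the oceans and ice.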
I’m so certain of the answer to this question, I named my blog after it.
A major difficulty with trying to simulate the earth system is that we can’t take it to pieces to see how it works. Climate modellers are usually physicists by training, and our instinct when trying to understand a thing is to isolate sections of it, or to simplify and abstract it. But we have limited success when we look at isolated parts of the planet, because everything interacts with everything else, and we run into difficulties with simplifications, because important things happen at every scale in time and space. We need to know a bit about everything. This is one of my favourite things about the job, and one of the most difficult.
For a perfect simulation of reality, we would need perfect understanding of every physical, chemical and biological process – every interaction and feedback, every cause and effect. We are indeed improving climate models as time goes on. In the 1960s, the first weather and climate models simulated atmospheric circulation, but other important parts of the earth system such as the oceans, clouds, and carbon cycle were either included in very simple ways (for example, staying fixed rather than changing through time) or left out completely. Through the decades we have developed the models, “adding processes”, aiming to make them better simulators of reality.
But there will always be processes we think are important but don’t understand well, and processes that happen on scales smaller than the pixel size, or faster than the model “timestep” (how often calculations are done, like the frame rate of a film). We include these, wherever possible, in simplified form. This is known as parameterisation.
Parameterisation is a key part of climate modelling uncertainty, and the reason for much of the disagreement between predictions. It is the lesser of two evils when it comes to simulating important processes: the other being to ignore them. Parameterisations are designed using observations, theoretical knowledge, and studies using very high resolution models.
For example, clouds are much smaller than the pixels of most climate models. Here is the land map from HadCM3, a climate model with a lower resolution than the one in the last post.
If each model pixel could only show “cloud” or “not cloud”, then a simulation of cloud cover would be very unrealistic: a low resolution, blocky map where each block of cloud is tens or even hundreds of kilometres across. We would prefer each model pixel to be covered by a percentage of cloud, rather than only 0% or 100%. The simplest way to do this is to relate percentage cloud to percentage relative humidity: at 100% relative humidity, the pixel is 100% covered in cloud; as relative humidity decreases, so does cloud cover.
Parameterisations are not Laws of Nature. In a sense they are Laws of Models, designed by us wherever we do not know, or cannot use, laws of nature. Instead of “physical constants” that we measure in the real world, like the speed of light, they have “parameters” that we control. In the cloud example, there is a control dial for the lowest relative humidity at which cloud can form. This critical threshold doesn’t exist in real life, because the world is not made of giant boxes. Some parameters are equivalent to things that exist, but for the most part they are “unphysical constants”.
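As a sketch, the cloud scheme described above might look something like this. The ramp shape and the 70% threshold are invented for illustration – the threshold is exactly the kind of control dial I mean:

```python
# Sketch of the cloud-cover parameterisation described above: no cloud
# below a critical relative humidity, ramping up to full cover at
# saturation. The threshold is an "unphysical constant" -- a control dial.

def cloud_fraction(relative_humidity, rh_critical=0.7):
    """Fraction of a grid box covered by cloud (inputs between 0 and 1)."""
    if relative_humidity <= rh_critical:
        return 0.0
    return min(1.0, (relative_humidity - rh_critical) / (1.0 - rh_critical))

for rh in (0.5, 0.7, 0.85, 1.0):
    print(f"RH = {rh:.0%} -> cloud cover = {cloud_fraction(rh):.0%}")
```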
The developers of a model play god, or at least play a car radio, by twiddling these control dials until they pick up the climate signal: in other words, they test different values of the parameters to find the best possible simulation of the real world. For climate models, the test is usually to reproduce the changes of the last hundred and fifty years or so, but sometimes to reproduce older climates such as the last ice age. For models of Greenland and Antarctica, we only have detailed observations from the last twenty years.
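In toy form, that tuning process might look like this (the “model” and “observations” are invented stand-ins):

```python
import numpy as np

# Toy version of tuning a control dial: run the model with each candidate
# setting and keep the one that best reproduces the observations.

observations = np.array([0.2, 0.4, 0.5, 0.7, 0.8])  # e.g. a past warming record

def simulate(dial):
    """A one-parameter toy model of the same record."""
    return dial * np.array([0.25, 0.5, 0.65, 0.85, 1.0])

dial_settings = np.linspace(0.5, 1.5, 101)
errors = [np.mean((simulate(d) - observations) ** 2) for d in dial_settings]
best = dial_settings[int(np.argmin(errors))]
print(f"best dial setting: {best:.2f}")  # the value used for projections
```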
As our understanding improves and our computing power increases, we replace the parameterisations with physical processes. But we will never have perfect understanding of everything, nor infinite computing power to calculate it all. Parameterisation is a necessary evil. We can never have a perfect reality simulator, and all models are… imperfect.
In case you do lie awake worrying that the entire universe is a simulation: it’s fine, we can probably check that.
The second question I want to discuss is this:
How can we do scientific experiments on our planet?
In other words, how do we even do climate science? Here is the great, charismatic physicist Richard Feynman, describing the scientific method in one minute:
If you can’t watch this charming video, here’s my transcript:
“Now I’m going to discuss how we would look for a new law. In general, we look for a new law by the following process:
First, we guess it.
Then we — no, don’t laugh, that’s the real truth — then we compute the consequences of the guess to see what, if this is right, if this law that we guessed is right, we see what it would imply.
And then we compare the computation result to nature, or we say compare to experiment, or experience, compare it directly with observations to see if it works.
If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it’s wrong. That’s all there is to it.”
What is the “experiment” in climate science? We don’t have a mini-Earth in a laboratory to play with. We are changing things on the Earth, by farming, building, and putting industrial emissions into the atmosphere, but it’s not done in a systematic and rigorous way. It’s not a controlled experiment. So we might justifiably wonder how we even do climate science.
Climate science is not the only science that can’t do controlled experiments of the whole system being studied. Astrophysics is another: we do not explode stars on a lab bench. Feynman said that we can compare with experience and observations. We would prefer to experience and observe things we can control, because it is much easier to draw conclusions from the results. Instead we can only watch as nature acts.
What is the “guess” in climate science? These are the climate models. A model is just a representation of a thing (I wrote more about this here). A climate model is a computer program that represents the whole planet, or part of it.* It’s not very different to a computer game like Civilisation or SimCity, in which you have a world to play with: you can tear up forests and build cities. In a climate model we can do much the same: replace forests with cities, alter the greenhouse gas concentrations, let off volcanoes, change the energy reaching us from the sun, move the continents. The model produces a simulation of how the world responds to those changes: how they affect temperature, rainfall, ocean circulation, the ice in Antarctica, and so on.
How do they work? The general idea is to stuff as much science as possible into them without making them too slow to use. At the heart of them are basic laws of physics, like Newton’s laws of motion and the laws of thermodynamics. Over the past decades we’ve added more to them: not just physics but also chemistry, such as the reactions between gases in the atmosphere; biological processes, like photosynthesis; and geology, like volcanoes. The most complicated climate models are extremely slow. Even on supercomputers it can take many weeks or months to get the results.
Here is a state-of-the-art simulation of the Earth by NASA.
The video shows the simulated patterns of air circulation, such as the northern hemisphere polar jet stream, then patterns of ocean circulation, such as the Gulf Stream. The atmosphere and ocean models used to make this simulation are high resolution: they have a lot of pixels so, just like in a digital camera, they show a lot of detail.
A horizontal slice through this atmosphere model has 360 x 540 pixels, or 0.2 megapixels. That’s about two thirds as many as a VGA display (introduced by IBM in 1987) or the earliest consumer digital camera (the Apple QuickTake from 1994). It’s also about the same resolution as my blog banner. The ocean model is a lot higher resolution: 1080 x 2160 pixels, or 2.3 megapixels, which is about the same as high definition TV. The video above has had some extra processing to smooth the pixels out and draw the arrows.
I think it’s quite beautiful. It also seems to be very realistic, a convincing argument that we can simulate the Earth successfully. But the important question is: how successfully? This is the subject of my next post:
Can we ever have a perfect “reality simulator”?
The clue’s in the name of the blog…
See you next time.
* I use the term climate model broadly here, covering any models that describe part of the planet. Many have more specific names, such as “ice sheet model” for Antarctica.
I must be, because I’ve been avoiding writing this post for some time, when previously I’ve been so excited to blog I’ve written until the early hours of the morning.
I’m a climate scientist in the UK. I’m quite early in my career: I’ve worked in climate science for six and a half years since finishing my PhD in physics. I’m not a lecturer or a professor, I’m a researcher with time-limited funding. And in the past year or so I’ve spent a lot of time talking about climate science on Twitter, my blog and in the comments sections of a climate sceptic blog.
So far I’ve been called a moron, a grant-grubber, disingenuous, and Clintonesque (they weren’t a fan: they meant hair-splitting), and I’ve had my honesty and scientific objectivity questioned. I’ve been told I’m making a serious error, a “big, big mistake”, that my words will be misunderstood and misused, and that I have been irritating in imposing my views on others. You might think these insults and criticisms were all from climate sceptics disparaging my work, but those in the second sentence are from a professor in climate change impacts and a climate activist. While dipping my toes in the waters of online climate science discussion, I seem to have been bitten by fish with, er, many different views.
I’m very grateful to PLOS for inviting me to blog about climate science, but it exposes me to a much bigger audience. Will I be attacked by big climate sceptic bloggers? Will I be deluged by insults in the comments, or unpleasant emails, from those who want me to tell a different story about climate change? More worryingly for my career, will I be seen by other climate scientists as an uppity young (ahem, youngish) thing, disrespectful or plain wrong about other people’s research? (Most worrying: will anyone return here to read my posts?)
I’m being a little melodramatic. But in the past year I’ve thought a lot about Fear. Like many, I sometimes find myself with imposter syndrome, the fear of being found out as incompetent, which is “commonly associated with academics”. But I’ve also been heartened by recent blog posts encouraging us to face fears of creating, and of being criticised, such as this by Gia Milinovich (a bit sweary):
“You have to face your fears and insecurity and doubt. [...] That’s scary. That’s terrifying. But doing it will make you feel alive.”
Fear is a common reaction to climate change itself. A couple of days ago I had a message from an old friend that asked “How long until we’re all doomed then?” It was tongue-in-cheek, but there are many that are genuinely fearful. Some parts of the media emphasise worst case scenarios and catastrophic implications, whether from a desire to sell papers or out of genuine concern about the impacts of climate change. Some others emphasise the best case scenarios, reassuring us that everything will be fine, whether from a desire to sell papers or out of genuine concern and frustration about the difficulties of tackling climate change.
Never mind fear: it can all be overwhelming, confusing, repetitive. You might want to turn the page, to change the channel. Sometimes I’m the same.
I started blogging to try and find a new way of talking about climate science. The title of my blog is taken from a quote by a statistician:
“essentially, all models are wrong, but some are useful” – George E. P. Box (b. 1919)
By “model” I mean any computer software that aims to simulate the Earth’s climate, or parts of the planet (such as forests and crops, or the Antarctic ice sheet), which we use to try to understand and predict climate changes and their impacts in the past and future. These models can never be perfect; we must always keep this in mind. On the other hand, these imperfections do not mean they are useless. The important thing is to understand their strengths and limitations.
I want to focus on the process, the way we make climate predictions, which can seem mysterious to many (including me, until about a month before starting my first job). I don’t want to try and convince you that all the predictions are doom and gloom, or conversely that everything is fine. Instead I want to tackle some of the tricky scientific questions head-on. How can we even try to predict the future of our planet? How confident are we about these predictions, and why? What could we do differently?
When people hear what I do, one of the first questions they ask is often this:
“How can we predict climate change in a hundred years, when we can’t even predict the weather in two weeks?”
To answer this question we need to define the difference between climate and weather. Here’s a good analogy I heard recently, from J. Marshall Shepherd:
“Weather is like your mood. Climate is like your personality.”
And another from John Kennedy:
“Practically speaking: weather’s how you choose an outfit, climate’s how you choose your wardrobe.”
Climate, then, is long-term weather. More precisely, climate is the probability of different types of weather.
Why is it so different to predict those two things? I’m going to toss a coin four times in a row. Before I start, I want you to predict what the four coin tosses are going to be: something like “heads, tails, heads, tails”. If you get it right, you win the coin*. Ready?
[ four virtual coin tosses...]
[ ...result is tails, tails, tails, heads ]
Did you get it right? I’m a nice person, so I’m going to give you another chance. I’m going to ask: how many heads in the next four?
[ four more virtual coin tosses... ]
[ ...result is two heads out of four ]
The first of these is like predicting weather, and the second like climate. Weather is a sequence of day-by-day events, like the sequence of heads and tails. (In fact, predicting a short sequence of weather is a little easier than predicting coin tosses, because the weather tomorrow is often similar to today). Climate is the probability of different types of weather, like the probability of getting heads.
If everything stays the same, then the further you go into the future, the harder it is to predict an exact sequence and the easier it is to predict a probability. As I’ll talk about in later posts, everything is not staying the same… But hopefully this shows that trying to predict climate is not an entirely crazy idea in the way that the original question suggests.
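If you’d like to check the coin analogy for yourself, here’s a little simulation – my own toy, not a climate model! As the sequence gets longer, guessing the exact sequence becomes hopeless, while the overall frequency of heads becomes ever easier to predict:

```python
import random

# Toy demonstration: exact sequences ("weather") get harder to predict
# with length, while frequencies ("climate") get easier.

random.seed(1)
trials = 20_000

for length in (4, 8, 16, 32):
    exact_hits, freq_error = 0, 0.0
    guess = [True, False] * (length // 2)       # one specific sequence guess
    for _ in range(trials):
        tosses = [random.random() < 0.5 for _ in range(length)]
        exact_hits += tosses == guess
        freq_error += abs(sum(tosses) / length - 0.5)
    print(f"{length:>2} tosses: exact guess right {exact_hits / trials:.2%}, "
          f"mean error in heads frequency {freq_error / trials:.3f}")
```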
My blog posts here at PLOS will be about common questions and misunderstandings in climate science, topical climate science news, and my own research. They won’t be about policy or what actions we should take. I will maintain my old blog allmodelsarewrong.com: all posts at PLOS will also be mirrored there, and some additional posts that are particularly technical or personal might only be posted there.
At my old blog we’ve had interesting discussions between people from across the spectrum of views, and I hope to continue that here. To aid this I have a firm commenting policy.
I’m extremely happy to support PLOS in their commitments to make science accessible to all and to strengthen the scientific process by publishing repeat studies and negative results. I’m also very grateful to everyone that has supported and encouraged me over the past year: climate scientists and sceptics, bloggers and Tweeters. Thank you all.
And thank you for reading. My next post will be about another big question in climate science:
How can we do scientific experiments on our planet?
See you next time.
* You don’t, but if you were a volunteer at one of my talks you would.