Seeing Antarctica’s future more clearly

Are you a details person?

Do you love to lose yourself in little things? To read every footnote of a book, watch ants in a patch of grass, memorise every mole on a lover’s skin?

I’m undecided. I often fall down the rabbit hole of the nitty-gritty and nit-picking. Other times I enjoy the hazy view of the bigger picture in life, ideas, and work.

When it comes to Antarctica, details matter. That’s because the ice sheet is super-sensitive to the lumps and bumps in the bed that lies beneath it: whether it is hard bedrock or soft sediment, and how slippery or rough it is.

Turns out the vast, ancient Antarctic ice sheet is restless like the Princess and the Pea.

The wrinkles and soft patches in Antarctica’s bed affect how the edges of the ice react to changes around it in the ocean and air, especially when the bed lies underwater (as it does for much of West Antarctica). If warm waters approach the coast, melting the ice from underneath, the edge of the ice can either retreat inland – potentially adding to sea level rise – or else hold firm. It all depends on the small-scale features of the bed. In this coldest of places, the devil is in the detail.

A study published this week – led by my friend and colleague Steph Cornford, with contributions by me and many others – predicts the future of Antarctica with greater detail, for a larger part of the continent, than ever before. That means we think they’re the most reliable predictions yet for how much sea levels might rise in future, in response to human-caused and natural climate change.

It all comes down to the ice sheet model (a description in computer code of how ice flows and changes) and in particular its ‘resolution’: how small the pixels are. Just as with a digital camera, the finer the resolution, the greater the clarity with which we see the pictures.

In this study the smallest pixels are 250 metres across. A human can jump that far.


A simulation of the Amundsen Sea Embayment, showing Pine Island Glacier (left) and Thwaites Glacier (middle). Dark blue is the ocean, and the other colours show the speed of the ice (red = fastest, blue = slowest). The light blue line divides ice resting on the bed from floating ice, and the patches of ice along the coastline are ice shelves. This is a prediction of how the ice sheet might respond to melting from warm water under a business-as-usual scenario. It doesn't include the effects of increased snowfall in a warmer world, which would compensate for some of the ice losses.

This extra clarity comes at a price: we can only simulate part of the ice sheet at a time, because the calculations are so slow. The study looks at four parts of the West Antarctic ice sheet: above is a simulation of the Amundsen Sea Embayment, with the two great ice streams Pine Island Glacier and Thwaites Glacier. The large dividing lines are 128 km (80 miles) apart, roughly the distance from Bristol to Reading, so that’s about how wide the mouth of Thwaites Glacier is now.

Last year we saw more evidence that this coastline is changing fast. The glaciers are losing ice; the thin blue line that divides ice resting on the bed from floating ice is retreating inland. They’re showing signs of instability, which means they might keep losing ice for some time. We’re not completely sure if the trigger was natural or man-made, but we know they are contributing to sea level rise, and that human-caused warming could make this worse. And we think other areas of West Antarctica might be vulnerable too, because they also lie on an underwater bed.

In the ice sheet interior the pixels (faint grey boxes) are fairly large (4 km, about the width of Manhattan) but at the coastline — where the interesting stuff is happening — they are so tiny the lines are a blur. This is called an ‘adaptive mesh’: the pixel size adapts in real time, picking out more detail where the changes are happening fastest.

This means the ice sheet model can capture far more detail about the valleys in the bed that guide ice towards the ocean, the downward slopes that make the ice sheet unstable, and the spikes and hills that snag and drag the ice sheet and stop or slow the retreat. All these crucially determine how sensitive the ice sheet – and therefore sea level – is to global warming.
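To make the adaptive idea concrete, here’s a toy sketch in one dimension – my own illustration, not the actual ice sheet model’s code: start with coarse cells and split any cell where the ice speed jumps sharply, down to a minimum cell size.

```python
# Toy sketch of adaptive mesh refinement in 1D (illustrative only):
# split any cell where the ice speed changes sharply, down to a
# minimum cell size, so detail concentrates where change is fastest.

def refine(cells, speed, threshold=0.5, min_size=0.25):
    """cells: list of (left, right) intervals; speed: function of x."""
    result = []
    for left, right in cells:
        jump = abs(speed(right) - speed(left))
        if jump > threshold and (right - left) > min_size:
            mid = (left + right) / 2
            # Split the cell and re-examine each half recursively
            result += refine([(left, mid), (mid, right)], speed, threshold, min_size)
        else:
            result.append((left, right))
    return result

# A made-up speed profile with a sharp change near the "coast" at x = 3
speed = lambda x: 0.0 if x < 3 else 2.0
mesh = refine([(0.0, 4.0)], speed)
# Cells stay coarse inland and shrink to min_size near the jump at x = 3
print(mesh)
```

Real adaptive-mesh codes refine in 2D or 3D and re-adapt as the simulation runs, but the principle is the same: spend the pixels where the action is.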

As co-author Dan Martin put it, it’s akin to transforming a blur into a flock of birds.

That’s why the study is special. What did we find?

We looked at two scenarios of human activity – business-as-usual (called A1B) or strong reductions in greenhouse gas emissions (E1) – and the results were quite surprising. Strangely, the second scenario seems to give more sea level rise.

That’s because there’s a balancing act between the loss and gain of ice. While ice can be lost from Antarctica because the coastline retreats inland, at the same time warmer air means more snowfall. This adds ice, compensating for some of the retreat.

The melting by the ocean is similar in both scenarios, so they lose about the same amount of ice from retreat. But in the cooler E1 scenario there is much less snowfall to compensate than in the warmer A1B.
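The balancing act can be written as back-of-the-envelope arithmetic. All the numbers below are invented for illustration – only the logic (similar ocean-driven loss in both scenarios, but more compensating snowfall in the warmer one) comes from the study:

```python
# Toy budget (invented numbers) for why the cooler scenario can give
# MORE sea level rise: losses from retreat are similar in both, but
# the warmer scenario gains more compensating snowfall.

loss_from_retreat = {"E1": 5.0, "A1B": 5.0}   # cm of sea level; similar
gain_from_snowfall = {"E1": 1.0, "A1B": 4.0}  # warmer air -> more snow

for scenario in ("E1", "A1B"):
    net = loss_from_retreat[scenario] - gain_from_snowfall[scenario]
    print(scenario, "net contribution:", net, "cm")
# E1 nets more rise than A1B, despite its lower emissions
```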

So in this study mitigation gives a couple of centimetres sea level contribution from Antarctica by 2100, and five or more by 2200; business-as-usual gives about a centimetre by 2100, and one to five by 2200.

Inevitably – as I often write about on this blog – there are uncertainties in these predictions. Two important ones come from climate models: the predictions of snowfall and melting by the ocean, which give the range of results at 2200 for A1B.

More importantly still, there is uncertainty about what Antarctica is doing today. Snowfall is hard to measure, especially over large areas of an inaccessible continent, so it also comes from modelling. Starting off with a different plausible version of today’s snowfall means the ice sheet model reacts very differently: Thwaites Glacier is much more unstable, and contributes two more centimetres by 2100 and ten more by 2200. That’s the simulation above.

So that flock of birds is still a little fuzzy. But we hopefully now have greater clarity about which parts of our climate matter most to West Antarctica, the restless Princess at the bottom of the world.


N.B. Sorry for closing comments on this post – I’m on holiday so can’t moderate or reply.

Category: antarctica, ice2sea, icesheetmodels, news, plainenglish, sealevel | 2 Comments

Climate Change By Numbers

Happy New Year (er…)!

Sorry for the total lack of posts since September – I’ve been busy settling into my new lectureship at The Open University, a UK distance learning university based in Milton Keynes. I’ve joined the research group of scientists such as Neil Edwards, Mark Brandon and Vince Gauci who work on the environment, earth and ecosystems. I’m also enjoying learning how to teach students from all educational backgrounds (Open by name, Open by nature) based around the world.

I’ve also been busy being one of three Scientific Consultants (capitals to denote Officialness) for a new BBC programme about climate science called Climate Change By Numbers. It will be broadcast in the UK on the 2nd March (BBC4, 9-10:15 pm) [Update: and repeated on the 3rd at 2:25am and 5th at 10pm] and available in the UK on iPlayer after that. I believe that a two part version will also be shown on BBC Worldwide at some point. Doug McNeall is another of the consultants (the third is not on social media).

I’ve closed comments on this post because (a) I’m a bit too busy to moderate right now and (b) I can’t talk about the content before broadcast anyway – they want people to watch it! But do watch this space for another post soon after broadcast, where I do hope to be able to answer people’s questions.

Here are the trailer and press release:

BBC Four explores the science behind three key climate change statistics

In a special film for BBC Four, three mathematicians will explore three key statistics linked to climate change.

In Climate Change by Numbers, Dr Hannah Fry, Prof Norman Fenton and Prof David Spiegelhalter home in on three numbers that lie at the heart of science’s current struggle to get a handle on the precise processes and impact of global climate change.

Prof Norman Fenton said: “My work on this programme has revealed the massive complexity of climate models and the novel challenges this poses for making statistical predictions from them.”

The three numbers are:

  • 0.85 degrees – the amount of warming the planet has undergone since 1880
  • 95% – the degree of certainty climate scientists have that at least half the recent warming is man-made
  • one trillion tonnes – the cumulative amount of carbon that can be burnt, ever, if the planet is to stay below ‘dangerous levels’ of climate change

All three numbers come from the most recent set of reports from the Intergovernmental Panel on Climate Change.

Prof David Spiegelhalter said: “It’s been eye-opening to find out what these important numbers are actually based on.”

In this programme, the three scientists unpack what the history of each of these three numbers is: where did they come from? How have they been measured? How confident can we be in their accuracy? In their journeys they drill into the very heart of how science itself works, from data collection, through testing theories and making predictions, giving us a unique perspective on the past, present and future of our changing climate.

Cassian Harrison, Channel Editor BBC Four, said: “This 75 minute special takes a whole new perspective on the issue of climate change. It puts aside the politics to concentrate on the science. It offers no definitive answers, but it does show the extraordinary achievements and the challenges still facing scientists who are attempting to get a definitive answer to what are perhaps the biggest scientific questions currently facing mankind.”

Executive Producer Jonathan Renouf said: “Who would have thought there’d be a link between the navigation system used to put men on the moon, and the way scientists work out how much the planet is warming up? It’s been great fun to come at climate change from a fresh angle, and discover stories that I don’t think anyone will have heard before.”


Category: media, news, scicomm | Comments Off on Climate Change By Numbers

How to love uncertainty in climate science

This is the script of my TEDxCERN talk, a 12-13 minute talk I did from memory. When the video is put online in a week or so, you’ll be able to follow along and see where I fluffed it – sorry, improvised. A shorter version appeared on Vice News under the headline “There Is Some Uncertainty in Climate Science — And That’s a Good Thing”.

I used to be a particle physicist. Sadly, I left before it became cool to be a particle physicist.

Here’s one of the collisions I observed for my PhD at Fermilab:


And in that previous life, the stringent criterion for being “certain” about a new discovery, like the Higgs boson that made headlines at CERN, is the 5 sigma confidence level.

Here’s the famous “bell curve”, with the sigma levels shown along the bottom:


You can see that 5 sigma is way out in the tails, with a very, very low probability of occurring by chance. It means we think there is only a one in 3.5 million chance that the signal could have been seen if there were no Higgs.
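For the curious, that “one in 3.5 million” figure is easy to check: it’s the one-sided tail probability of a standard normal distribution beyond 5 sigma.

```python
# Check the "one in 3.5 million" figure: the probability of a
# standard normal fluctuation of 5 sigma or more (one-sided tail).
import math

p = 0.5 * math.erfc(5 / math.sqrt(2))  # area under the bell curve beyond +5 sigma
odds = 1 / p
print(f"p = {p:.3e}, i.e. about 1 in {odds / 1e6:.1f} million")
# p = 2.867e-07, i.e. about 1 in 3.5 million
```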

Now that I’m a climate scientist, I dream of such certainty! We’re studying an enormously complex planet. I’m going to talk about some of the reasons there’s uncertainty in climate science, some of the problems that’s causing between science and society, and what I think we can do about it.

I’ll start with some things we’re certain about. The earth’s energy budget is out of balance: there’s more energy going in than coming out, so the planet is storing it up. That’s not unusual in itself, only that we are helping tip the scales. The extra energy means the atmosphere and the surface of the ocean have warmed, making the hottest days warmer and more frequent, and the coldest days less frequent.

As the oceans heat up they expand, and ice on land – in glaciers, and the Greenland and Antarctic ice sheets – has also been melting into the oceans, and breaking off in icebergs, faster than it has been replaced by new snow. So global average sea level has risen. We’re confident our activities have been the dominant cause of warming since the middle of the last century.

How about the future? We predict more of the same. We’re confident the world will get warmer, shifting the hottest and coldest days further, and that rainfall will become heavier in some places, such as the wet tropical regions. Global average sea level will continue rising, making the extreme highs in sea level higher and more frequent.

But it’s not only climate scientists that are certain. Not everyone knows this, but more and more climate sceptics agree with us too. Yes, there are people who don’t believe CO2 is a greenhouse gas, and likely never will. But in my experience many sceptics in the blogosphere, media and politics absolutely agree we are having an effect on climate. They question only the details, such as how fast that warming will be, or how we should reduce the risks.

How do we know what we’re talking about? The big picture predictions come from our observations of the planet and our fundamental physical understanding, some of which is 200 years old. But the details – exactly how fast temperatures and sea levels will rise, and which parts of the world will experience heavier rainfall – must come from computer models.

Here’s a map of the world in a climate model:

Map of land and oceans for UK Met Office Unified Model HadCM3 (at atmospheric resolution)

The model’s about 15 years old, but it’s still used. You can see the world has been simplified: it looks blocky, almost like a very early digital camera.

We need to use computer models because we don’t have a miniature earth to play with. It’s not only climate science with this difficulty. If you want to study, say, the evolution of galaxies, it’s a bit easier to write computer code than to create a hundred million stars… At the heart of climate models are basic laws of physics, like Newton’s laws of motion, and over time we’ve added more and more physics, chemistry, biology and geology.

But a model can never be perfect: it is by definition a simplified representation of reality. There’s a great saying by the statistician George Box, who sadly died last year: “All models are wrong, but some are useful”. I think this is so important I named my blog after it.

Not only are all models simplified, but their predictions partly depend on the numbers you plug in. And we can’t always know what those numbers should be: say, if they’re hard to measure in the real world. So there’s uncertainty because of the simplifications and unknown inputs of our computer models.

A second reason is that the very definition of climate has uncertainty at its heart. People often think of weather and climate as the same thing, but they’re not. Weather is the state of the atmosphere: the temperatures, rainfall and pressures we can measure with instruments. Climate is different. We can think of climate as “the probability of different types of weather occurring”.

The fact that climate is a statement of probability means two important things. First, that climate is inherently uncertain. A probability is a statement of uncertainty. “We predict the weather will mostly be X, sometimes Y and occasionally Z”.

Second, it means that climate is a long-term thing. That’s because to estimate a probability you need a lot of data. If you were flipping a coin to see if it’s fair, 50:50 heads or tails, you’d have to do it a lot of times before you could be sure. In the same way we need around 30 years or more of weather records to get just one data point of climate. So the fact that climate is a probability means that it’s uncertain and that we need a lot of data to test our models.
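If you want to see the coin-flipping point for yourself, here’s a quick simulation (purely illustrative): a fair coin flipped only a few times can easily look biased, but flipped many times it settles near 50:50.

```python
# Estimating a probability needs many samples: a fair coin flipped
# 10 times can look biased; flipped 10,000 times it settles near 0.5.
import random

random.seed(1)  # fixed seed so the run is repeatable

def estimate_heads(n_flips):
    """Fraction of heads in n_flips tosses of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

small = estimate_heads(10)
large = estimate_heads(10_000)
print(f"10 flips: {small:.2f}   10,000 flips: {large:.3f}")
```

The same logic is why one hot year tells us little, but thirty years of weather records give a usable climate data point.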

My research focuses on both sources of uncertainty I’ve mentioned: the limitations of computer models, and testing their predictions.

Uncertainty doesn’t mean we don’t know anything. In fact I’d argue that uncertainty is the engine of science, because it drives our search to understand the universe. But misunderstanding and misrepresentation of uncertainty is damaging the relationship between climate science, the media and society, because climate science is both complex and highly politicised.

The first problem uncertainty brings is the extra difficulty for the expert in explaining their results, and the non-expert in understanding them. For example, over the past 17 years or so there has been a slowdown, even a pause, in the rate of warming of the atmosphere. We’re confident the climate is still changing, because the ocean is still warming, the land losing ice, sea level rising, and we predict the atmosphere will start to warm again after this temporary blip.

We think there are several contributing factors to this pause, including a change in the movement of heat around the planet, a dip in the brightness of the sun, reflection of the sun by pollution and volcanic eruptions. But because we need to use computer models to understand it, and because 17 years is not that long when it comes to climate, we don’t know the exact contributions of each. Clearly this is not simple, sound bite science.

The second problem is that scientists in any area of cutting edge research will disagree with each other. If the media or public don’t expect that, it can cause confusion, and, worse still, because climate science is politicised, these disagreements are often sold as proof of “unreliable science”, an argument to ignore scientists until it’s all “sorted out”.

For example, some scientists predict global average sea level rise under the highest greenhouse gas emissions scenario will likely be 20 to 30 inches by the end of the century. Others predict it will very likely be 3 to 5 feet, or possibly over 6 feet. That’s a big difference! The reason is that the two groups look at the problem in quite different ways – the first use methods based in physics, the second statistics. That’s an interesting story to tell, because we don’t yet know the best approach.

We might like to think of science as a neat, orderly book of facts, but really it’s like searching for the right path in a fog. It takes time to find out which was the right one.

The third problem is that scientific uncertainty allows people to spin our results. We had a press conference for a project I worked on called ice2sea, which made predictions of global average sea level rise using the physics-based methods. Some journalists reported our results as “Sea level rise to be less severe than feared” because they compared us with those higher, statistical studies. Other journalists reported the same press conference as “Risk from rising sea levels worse than feared”, because they chose to compare us to the previous report of the Intergovernmental Panel on Climate Change which, like us, used the physics methods, but didn’t tally every possible part of sea level rise. One website chose to go with “The End of London as we know it…”.

It’s no wonder the public are confused. Every media outlet tells the story it wants to tell.

But we as scientists haven’t always helped. We haven’t always sold the idea of uncertainty as not only inevitable but even exciting, and we’ve sometimes over-simplified our communication. That pause in warming of the atmosphere surprised the media and public, even though scientists always expected this kind of thing could happen in the short term. That’s because we focused too much on talking about the long-term average predictions, which smooth out the year-to-year changes.

We’ve also done a bad job at being available. How many climate scientists can you name? Where do you get your climate science from: interviews with scientists? More likely the media, politicians, and activists, whether environmental or sceptical. We’ve mostly kept our heads below the parapet, for fear of attracting fire when communicating complex science in a politicised atmosphere. There are certainly days I hide away. But we need to be braver.

Things are changing. There are more of us online than ever, giving interviews and talks, and trying to explain the nuances of the science.

But we can do better. My colleagues Ed Hawkins, Doug McNeall and I wrote a journal article about the slowdown in atmospheric warming called “Pause for thought”, which we’re proud to say was the first journal article to use Twitter handles for the author contact information. We called on our colleagues to join us online and in the media, because the more that do, the easier it gets: the less we have to speak outside our comfort zone, and the more we can support each other.

I’d especially love more female scientists to get out there. In our society, men are often rewarded for being competitive. I like to think if there were more women involved, it might help naturally move things from a climate debate to a climate conversation.

And for that conversation I’d like to invite you, to the public, to come and find us. There are now hundreds of climate scientists on Twitter, and the small number of us that blog is growing. But we’re mostly engaging with those who are already passionate, the environmentalists and dissenters. We’d like to talk to more in the middle ground: the fence-sitters, the understandably confused.

So I’m curating a Twitter list of climate scientists: a directory of active researchers – physical scientists, computer scientists and statisticians – who are studying climate change and its impacts. You can find it from my Twitter profile, flimsin.

So far I’ve added 250 climate scientists. If you’re a climate scientist, or know one, tweet me to add to the list. And if you’d like to ask a climate scientist a question – to discuss a news article, or explain their results – then just read through the biographies and find some scientists to ask.

I hope this list will grow, and start conversations that help us deal better with uncertainty in climate science – perhaps even with the messy business of science itself. So if you’re confused about climate … puzzled about the pause … surprised about sea level … or just uncertain about uncertainty … please come and find us. We’d love to talk.

With enormous thanks to TED coach Michael Weitz and Head of TEDxCERN Claudia Marcelloni De Oliveira for helping me make this talk more accessible and clear. Thanks to Vice News for suggestions to improve the readability of my article, and to Jonty Rougier, Ed Hawkins and Doug McNeall for useful comments and encouragement.

Category: boxquote, climatemodels, ice2sea, introductory, sealevel, statistics, uncertainty | 21 Comments

Open to positive feedback

I’m extremely happy to say that on the 1st October I’ll be taking up a lectureship at the Open University!

I’ll be sad to leave Bristol. I’ve spent all my years as a climate scientist there, ever since those running the NERC project PalaeoQUMP took a punt on a particle physicist. I’ve been very supported in my science and my public engagement, particularly by my boss for the last few years, Tony Payne, all those who run the Cabot Institute, and public engagement guru and incredible cheerleader for those who do it, Kathy Sykes. I’ll deeply miss regular coffees and cakes with one of the finest humans I know, Jonty Rougier. I’ll feel unsettled without my amazing network of friends, collaborators, and interesting-idea-swappers across the university – from drug use and mental health, to risk of volcanic eruptions, to philosophy of mathematics and politics. And there are many other very lovely friends: thank you for listening, helping, and being very cheering.

But I’m incredibly excited about the move. It’s not just that it’s a – cough – permanent position, or the relief of “making it” as an academic (for now…). It’s that the Open University makes education available to many of those with obstacles to campus learning, such as families, jobs, ill health and disabilities, or prison sentences. It also has a commitment to the wider public, working with the BBC to make wonderful programmes such as Frozen Planet (Principal Academic Advisor Mark Brandon) and making a lot of material free online (OpenLearn and FutureLearn). Unusually, they also put their money where their mouth is when it comes to multidisciplinarity. I’m excited to join Mark, Neil Edwards (no relation – I’m not an Edwards by blood), Joe Smith and others, continuing my research in model uncertainty for future sea level and past climates.

While trying to finish up in Bristol, I’ve been trying to say no to most new requests, particularly public engagement. But one came along I couldn’t say no to (thanks for recommending me, Jon Butterworth). I’m very excited to be giving a TEDxCERN talk next month. There’ll be a live audience (tickets all gone) and webstreaming, and the talks will also be put online afterwards. The line up looks fantastic, Brian’s hosting, and I’m pleased to finally visit CERN…

Looking for a job this year has contributed to my long, er, hiatus in blogging (as well as my terrible response time to emails…I’m so sorry). Here’s a post I began writing back in January, about our paper on the Greenland ice sheet. It was published in The Cryosphere, one of the Copernicus Publications stable of open access journals that have transparent peer review. I love this public discussion approach, and usually sign my name to my reviews. Like PLOS, like the Open University, I’m a fan of opening everything up.


This post is about a result we published early this year from the ice2sea project, one of two companion papers asking: how might the interactions between the shape of the Greenland ice sheet and its local climate affect sea level rise?

Greenland is losing ice. But, as I’ve written before, this is mostly because it is always losing ice. At the same time it is also gaining it. Permanent ice sheets and glaciers are not really permanent, because these inputs and outputs change each year and season. So the size of the Greenland ice sheet is a matter of accounting. Its contribution to sea level depends on whether the books are balanced: ice in minus ice out.

On the surface, ice is added when snow compacts, or rain and meltwater freeze, and ice is removed when it melts without refreezing (the water runs off into the ocean), or sublimates. This part of the ice budget is called the surface mass balance. (The other part of the budget is ice being lost “dynamically” at the edge of the ice sheet, where icebergs calve into the oceans, but I won’t discuss that in this post).

Surface mass balance depends on altitude. If you are high, as you are in the middle of the ice sheet, on average more ice is created than lost (the accumulation zone, where the net “ice income” is positive). If you are below about 1 km elevation, along the edges of the ice sheet, more ice is lost than gained (the ablation zone, where the net is negative). Here is an elevation map of the Greenland ice sheet as represented by a regional climate model called MAR (pixel size 25 km × 25 km):

Ice sheet surface elevation in the regional climate model MAR. The red dashed line is 77 degrees North. [Figure 1(a) from Edwards et al. (2014a), The Cryosphere].

What happens if we warm the air above the ice, as we predict will happen in the future? More melting, so ice is lost and the surface becomes lower. Being lower means being warmer, because of the temperature lapse rate (how temperature changes with altitude). Warmer means more melting, which means…more ice loss. It’s a positive feedback loop. Here ‘positive feedback’ doesn’t mean ‘lovely praise from your colleagues’. It means amplifying. The effect of the original climate change on sea level is increased.
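A toy way to see why this is an amplifying loop, with invented numbers (the lapse rate and melt sensitivity below are placeholders, not values from our work): each round of melting lowers the surface, which warms it, which drives a smaller round of melting – a geometric series that adds up to more loss than the original warming alone.

```python
# Toy illustration (invented numbers) of the melt-elevation feedback:
# lowering the surface warms it via the lapse rate, and warming melts
# more ice, which lowers the surface further - a geometric series.

lapse_rate = 0.007      # degC of warming per metre of lowering (assumed)
melt_per_degree = 10.0  # metres of surface lowering per degC (assumed)
initial_warming = 1.0   # degC of climate warming that starts the loop

melt = 0.0
warming = initial_warming
for _ in range(50):                      # iterate the loop to convergence
    extra_melt = melt_per_degree * warming
    melt += extra_melt
    warming = lapse_rate * extra_melt    # warming caused by the new lowering

amplification = melt / (melt_per_degree * initial_warming)
print(f"total melt {melt:.2f} m, amplification x{amplification:.3f}")
# total melt 10.75 m, amplification x1.075
```

With these made-up numbers the feedback only amplifies the loss by a few percent; the point is just the shape of the loop, not its size.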

But when the ice sheet changes shape it also affects precipitation (snow and rainfall). The precipitation part of the feedback is more complex than the temperature part because it can act in two opposing ways. A lower surface can mean less precipitation, because the air does not rise and cool as much, which is another positive feedback. It can also mean more precipitation, because the air is warmer and so holds more moisture, which can add ice, countering part of the effect of the original climate change on sea level.

So we’re interested in the combined effect of the temperature and precipitation changes. Does this “surface mass balance elevation” feedback amplify the Greenland contribution to sea level rise compared with climate change alone? How confident are we about this?

In other words, how much is the surface mass balance affected by the shape of the ice sheet as it responds to climate change? Can we express this in a simple way, and quantify the change per metre in height? We decided to call this number the SMB lapse rate: SMB for surface mass balance, and lapse rate in analogy with…temperature lapse rate. If it’s a positive number, it’s a positive feedback; the larger the number, the larger the amplification of sea level rise.

In this study, we needed to estimate the SMB lapse rate in the regional climate model MAR, not in the real world. That’s because we wanted to use it as a simple way of connecting MAR (which has a simplified ice sheet) with an ice sheet model. It adds in the feedback loop between surface mass balance, simulated by MAR, and ice sheet shape, simulated by the ice sheet model. So we needed it to represent the modelled feedback, to be self-consistent.

It would also be possible to connect the two models by joining up, or ‘coupling’ their code, but this is quite time-consuming and tricky to do. It also reduces the speed of simulation to the lowest common denominator, which is the (veeery slooooow) regional climate model. That’s very limiting, when the ice sheet model is super fast.

Our method means we can simulate the climate once, and then use the fast coupling to connect this climate simulation with the ice sheet model later. Or…with another ice sheet model. Or a group of five ice sheet models, as we did in the companion paper. In other words, it means we can assess the uncertainty from using different ice sheet models, or different settings of those ice sheet models, without the need to run the very slow climate model again and again.

It’s not the first time someone has quantified the SMB lapse rate for a regional climate model. But it is the first time anyone (a) studied the full probability distribution of the lapse rate and (b) used it in ice sheet model projections (companion paper) to test how much it affects future sea level rise.

By ‘full probability distribution’ I mean the following kinds of shapes.

First guess (a.k.a. “prior”, light grey) and updated (“posterior”, dark grey) distributions for one of our SMB lapse rates. (Ablation zone, south of 77 degN latitude). [Figure 7(a) from Edwards et al. (2014a), The Cryosphere].

These histograms are rough estimates of probability distributions for a MAR SMB lapse rate. (The curves are smoother estimates). I say “a”, not “the”, SMB lapse rate because we estimate different ones for the north and south of Greenland, and for the accumulation and ablation areas.

The light grey histogram is our “first guess”, or prior, distribution for an SMB lapse rate. (It’s the one for the ablation zones along the south Greenland coasts). The units are kilograms of ice per metre cubed per year*. The values come from changing the ice sheet elevation in MAR and seeing the effect on SMB. Each value is from a single MAR pixel, calculated by dividing the change in SMB by the change in elevation. It shows that in some areas the SMB lapse rate is positive – the net ice budget decreases as you go lower, as you’d expect from the temperature part of the feedback – but in others it is negative, showing the complexity of the precipitation part.
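In code, building that prior histogram amounts to a per-pixel division. The numbers below are made up; only the recipe – the change in SMB divided by the change in elevation, one sample per pixel – is from the study:

```python
# Sketch (invented values) of how the prior histogram is built:
# perturb the elevation in each pixel, record the SMB change, and
# divide to get one lapse-rate sample per pixel.

# Hypothetical per-pixel values: (change in SMB in kg/m^2/yr,
#                                 change in elevation in m)
pixels = [(-180.0, -100.0), (-90.0, -50.0), (120.0, -60.0), (-300.0, -120.0)]

prior_samples = [d_smb / d_h for d_smb, d_h in pixels]
# Positive samples: the ice budget worsens as the surface lowers
# (the temperature part of the feedback wins); negative samples:
# it improves (the precipitation part wins).
print(prior_samples)
```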

The dark grey histogram is our “updated”, or posterior, distribution. Normally in a Bayesian analysis this update means “in the light of new observations”. Here we are trying to estimate the SMB lapse rate in MAR, not the real world, so the “observations” are actually another MAR simulation. We have used each value in the light grey histogram to predict the SMB that MAR would simulate, if we changed the ice sheet height**. And we compare this with the SMB that MAR does simulate, when we do change it! (It is less easy to do this with the real Greenland ice sheet). The better the match, the greater the weight we give to the SMB lapse rate value.
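The update itself can be sketched as a simple importance weighting, which is one standard way to do this kind of Bayesian calculation (the actual paper’s method may differ in detail; every number below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior: lapse-rate values (kg m^-3 yr^-1), like the light grey histogram.
prior_samples = rng.normal(2.0, 1.2, 500)

# "Observation": the SMB change MAR actually simulates for a test
# elevation change. All hypothetical numbers.
elevation_change = -300.0          # metres, a future-like lowering
observed_smb_change = 2.1 * 300.0  # kg m^-2 yr^-1 (invented target)
sigma = 150.0                      # tolerance on the mismatch, an assumption

# Each prior value predicts an SMB change; weight by how well it matches.
predicted = prior_samples * (-elevation_change)
weights = np.exp(-0.5 * ((predicted - observed_smb_change) / sigma) ** 2)
weights /= weights.sum()

# Posterior summaries: the distribution "squeezes" around the best values.
posterior_mean = np.sum(weights * prior_samples)
posterior_std = np.sqrt(np.sum(weights * (prior_samples - posterior_mean) ** 2))
prior_std = prior_samples.std()
print(round(posterior_mean, 2), round(posterior_std, 2), round(prior_std, 2))
```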

This updating has the effect of ‘squeezing’ the distribution, giving higher probability to the best values (around 2 kg per metre cubed per year for this example). Our results show that overall, when trying to represent the kind of ice sheet changes MAR predicts in the future, the positive SMB lapse rates are most successful: the feedback amplifies sea level rise. These results apply to MAR, but others have found similar results for other climate models.

The nice thing about this Bayesian approach – trying to estimate probability distributions in this way – is that it gives you a fuller picture. Instead of just one value with an error bar, it gives you a shape. Our distributions happened to be quite symmetric, but if they had been wonky that would have been interesting. Not everyone likes Bayesian statistics, but this post is already too long to go into that (very interesting) topic!

We use these (dark grey) distributions in the companion paper to assess how much the uncertainty in the SMB lapse rate affects the sea level projections. We try out the most probable lapse rate – the peak of the distribution – but also the high and low values from the extremes of the distribution. This way we can try and understand the full picture of the uncertainty. Is the effect ‘wonky’: do the high and low values have symmetric effects on sea level? For example, you could imagine that the most probable SMB lapse rate amplifies sea level rise because it is quite large and positive, but maybe the lowest values in the distribution have little amplifying effect or could even, if negative (as they are for some of our other SMB lapse rates), reduce sea level rise compared with climate change alone. And how much of the true uncertainty of the climate and ice sheet models have we assessed here?
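Picking those three values out of a distribution is straightforward; here is a sketch with made-up posterior samples (the 2.5/97.5 percentile choice is my illustrative assumption, not necessarily the cut-offs used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior samples for one SMB lapse rate (kg m^-3 yr^-1).
posterior = rng.normal(2.0, 0.5, 2000)

# Most probable value (the peak of a histogram) plus low/high values
# from the tails of the distribution.
counts, edges = np.histogram(posterior, bins=40)
i = int(np.argmax(counts))
peak = 0.5 * (edges[i] + edges[i + 1])
low, high = np.percentile(posterior, [2.5, 97.5])

# Each of the three values would drive a separate ice sheet model run,
# and we would compare the resulting sea level contributions.
for label, value in [("low", low), ("peak", peak), ("high", high)]:
    print(f"{label}: {value:+.2f}")
```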

Tune in to the next blog post*** to find out…

* which comes from kilograms of ice per metre squared area per year (units of surface mass balance), per metre (units of height).

** More detail: the prior comes (mostly) from simulations where we lower the ice sheet height everywhere by a fixed amount of 50m or 100m. The test simulation has the kind of ice sheet height changes we expect in the future under climate change, which are up to a kilometre lowering at the edges and up to about 50m raising in the middle. This means the SMB lapse rate values that are best at representing this kind of MAR response are given the highest weight. Remember we are trying to represent MAR’s response in projections of the future, not represent the real world…

*** or read the open access companion paper

P.S. A reminder of moderation here: if you don’t see your comment appear for a while, it’s because you haven’t commented before (with those details) and I haven’t yet had time to read it and let it through. If I do want to moderate it, I usually try to do this transparently and tell you why. Comments on old posts tend to be ignored though (sorry again).

Category: aboutme, Bayesian, climatemodels, ice2sea, icesheetmodels, parameterisation, plainenglish, probability, sealevel, statistics, uncertainty | 2 Comments

Whether environmental modellers are wrong

[This is a comment invited by Issues in Science and Technology as a reply to the article “When All Models Are Wrong” in their Winter 2014 issue. The article is not online there but has been archived by My comment will appear in the Spring 2014 issue.]

I was interested to read Saltelli and Funtowicz’s article “When All Models Are Wrong”1, not least because several people sent it to me due to its title and mention of my blog2. The article criticised complex computer models used for policy making – including environmental models – and presented a checklist of criteria for improving their development and use.

As a researcher in uncertainty quantification for environmental models, I heartily agree we should be accountable, transparent, and critical of our own results and those of others. Open access journals — particularly those accepting technical topics (e.g. Geoscientific Model Development) and replications (e.g. PLOS One) — would seem key, as would routine archiving of preprints (e.g. and (ideally non-proprietary) code and datasets (e.g. Academic promotions and funding directly or indirectly penalise these activities, even though they would improve the robustness of scientific findings. I also enjoyed the term “lamp-posting”: examining only the parts of models we find easiest to see.

However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful. Even the fraction of retraction notices is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny, e.g., or relevance (obsolete papers not retracted). It is not currently possible to reliably compare retraction notices across disciplines. But in one study of scientific bias, measured by fraction of null results, Geosciences and Environment/Ecology were ranked second only to Space Science in their objectivity3. It is not clear we can assert there are “increasing problems with the reliability of scientific knowledge”.

There was also little acknowledgement of existing research on the question “Which of those uncertainties has the largest impact on the result?”: for example, the climate projections used in UK adaptation4. Much of this research goes beyond sensitivity analysis, part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify structural uncertainty, a modeller implicitly makes the assumption that errors could be tuned away. While this is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models5,6.

The authors make strong statements about political motivation of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists be limited by resources (computing, person, or project time) or, admittedly less satisfactorily, statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience modellers might already need tactful persuasion to detune carefully tuned models, and consequently increase uncertainty ranges; slinging accusations of motivation would not help this process. Far better to argue the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlight where effort should be concentrated, we can be motivated by better model development. And by showing where we have been ‘surprised’ by too small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.

With thanks to Richard Van Noorden, Ed Yong, Ivan Oransky and Tony O’Hagan.


[1] Issues in Science & Technology, Winter 2014.

[2] “All Models Are Wrong”, now hosted by PLOS at

[3] Fanelli (2010): “Positive” Results Increase Down the Hierarchy of the Sciences, PLoS ONE 5(4): e10068.

[4] “Contributions to uncertainty in the UKCP09 projections”, Appendix 2.4 in: Murphy et al. (2009): UK Climate Projections Science Report: Climate change projections. Met Office Hadley Centre, Exeter. Available from

[5] Rougier et al. (2013), Second-Order Exchangeability Analysis for Multimodel Ensembles, J. Am. Stat. Assoc. 108: 503, 852-863.

[6] Williamson et al. (2013): History matching for exploring and reducing climate model parameter space using observations and a large perturbed physics ensemble, Climate Dynamics 41:7-8, 1703-1729.

Category: boxquote, climatemodels, decisionmaking, parameterisation, policy, statistics, uncertainty | 15 Comments

Pause for thought

Ed Hawkins, Doug McNeall and I have just had a commentary published called Pause for Thought. It’s part of a Nature Focus on the slowdown in global surface warming, which includes six commentaries plus new research by Seneviratne et al. on how the number of extreme hot days has continued increasing throughout the slowdown in the global average. Unfortunately it’s not open access, but the content is free for the next month with registration, and I can also put our article online in six months.

First, what is the slowdown? Since the late 1990s the global average surface temperature has increased more slowly than in the two decades before. In fact it is fairly flat, so it’s often called a pause or hiatus, though there is increasing evidence that it’s a slowdown not a complete stop.

Our piece is not about whether it is a pause or slowdown, or the various reasons this may have happened (one of them being a temporary increase in heat being transferred from the atmosphere to oceans). We write about communication of this topic, particularly online. It’s also the first Nature paper where the authors give their Twitter handles!

Climate model projections have shown periods of cooling of about this length, embedded within longer-term warming, since before this pause happened. But our communication of this expectation has not been good: it has been a surprise to public and journalists alike.

First, the IPCC Summaries for Policymakers have not been very clear that pauses could occur, at least until the most recent report (quotes from these are given in the article).

Second, climate scientists tend to show averages of many simulations, which smooths out any temporary changes in trend. Here is a figure that shows some individual simulations and how each one can have slowdowns at different times:


The role of variability in global temperatures. Observed global mean surface air temperatures (solid black line) and recent 1998–2012 trend (dashed black line), compared with ten projections from a global climate model (grey lines). The grey shading is the 16–84% spread (smoothed for clarity). Two different simulations are highlighted (blue), and trends for specific interesting periods are shown (red, green, purple lines). The highlighted simulation shows a strong warming in the 1998–2012 period, but a 15-year period of no warming around the 2030s. [Figure 1a from Hawkins et al. (2014), Nature Climate Change].
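You can see the same effect in a toy calculation: add some internal variability to a steady warming trend, and flat 15-year stretches appear in individual runs even though the ensemble mean warms steadily. Everything here is invented (a random walk stands in for ocean variability; real climate model noise is more structured):

```python
import numpy as np

rng = np.random.default_rng(3)

years = np.arange(1980, 2061)
forced = 0.02 * (years - years[0])  # steady forced warming, deg C

# Ten hypothetical simulations: the same forced trend plus each run's own
# internal variability.
n_runs = 10
variability = np.cumsum(rng.normal(0.0, 0.08, (n_runs, years.size)), axis=1)
runs = forced + variability

# Count 15-year windows, across all runs, whose least-squares trend is
# flat or negative -- "pauses" hidden inside a warming ensemble.
window = 15
flat_windows = 0
for run in runs:
    for start in range(years.size - window):
        slope = np.polyfit(np.arange(window), run[start:start + window], 1)[0]
        if slope <= 0.0:
            flat_windows += 1

# The ensemble mean smooths the variability away into a steady trend.
mean_slope = np.polyfit(np.arange(years.size), runs.mean(axis=0), 1)[0]
print(flat_windows, round(mean_slope, 3))
```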

Third, the causes of slowdowns are complex and sometimes the desire to simplify means communication has been plain wrong:

Although the most recent decade is the warmest since 1850, this does not mean there is no pause, as some have seemed to suggest.

As a rough guide to public interest in recent years, we included a Google Trends graph for related searches:


Quantity of Google searches for the terms ‘global warming stopped’ (blue) and ‘global warming pause’ (red) over the period from January 2007 to December 2013, expressed as ‘relative interest’ with the highest monthly total given an index of 100. Accessed on 23 Jan 2014 and subject to change. [Figure 2 from Hawkins et al. (2014) Nature Climate Change.]
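The ‘relative interest’ scale is just a rescaling: whatever the busiest month was, it gets 100, and everything else is expressed as a fraction of that. A two-line illustration with invented monthly counts:

```python
# Hypothetical monthly search counts for a term; Google Trends reports
# them rescaled so the busiest month scores 100 ("relative interest").
monthly_counts = [12, 30, 75, 48, 150, 90]
peak = max(monthly_counts)
relative_interest = [round(100 * c / peak) for c in monthly_counts]
print(relative_interest)  # [8, 20, 50, 32, 100, 60]
```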

We believe the increase for ‘global warming stopped’ in early 2008 was driven by this New Statesman article and the peak in October 2012 by this piece in the Mail Online. From March 2013, ‘global warming pause’ appears to have been popularised by another article in the Mail Online and one in The Economist. The peak in September 2013 is due to media coverage of the Summary for Policymakers of the IPCC Fifth Assessment Report (Working Group I). If you want you can see some videos of me grinning too much to try and look approachable while talking about the IPCC report.

We point out the very active discussions in the Twittersphere and blogosphere, “often discussing rather complex technical issues from the latest literature”, but note that the amount of content from climate scientists is hugely outweighed by that from commentators (on any point of the opinion spectrum).

So we call on our fellow scientists to join us in these online conversations. I talked in a recent Open University panel on social media and climate science – recording here – about “flooding the market” (no pun intended) with climate scientists. This shares the ‘load’ for us all, by involving more experts from different research areas, and shows climate science and scientists most directly. We give some general recommendations about engaging with the public online:

Although online conversations can be unpredictable, rambunctious and frustrating, they are often personally and professionally rewarding…From our experience, the online ‘audience’ is often technically proficient, but neither captive nor necessarily interested or patient, so conversations are more successful than lessons. We always expect, and try, to learn something from those we seek to ‘teach’. Where there is a genuine uncertainty we must not ignore it. We find that being defensive, over-confident or dogmatic are not successful strategies. Humour and humility are useful in keeping people on board and one’s sanity intact.

We believe the complexity of the science and the public interest in the pause are not ‘difficulties’ to be avoided or glossed over but instead provide a fantastic opportunity to dig into the details of the science.

We should see the pause as an opportunity, offering a clear hook to explore exciting aspects of climate science; to draw back the curtain on active scientific discussions that are often invisible to the public. The pause is a grand ‘whodunnit’ at the edge of our scientific understanding…The challenge is to embrace the complexity of the situation, to acknowledge the uncertainty and the nuance, to welcome questions and investigation and show the process of climate science in good health. Online engagement would seem to be essential in this endeavour.

[Note on moderation policy: all comments are moderated by me, not PLOS. If there is a delay in viewing your comment it’s because I haven’t read it yet. If someone else’s comment appears it’s probably because they have commented before under that email address and are automatically allowed through. Too many links might also send you into a spam folder that I never check.]

Category: blogging, climatemodels, plainenglish, scicomm, uncertainty | 147 Comments

Nine Lessons and Carols in Communicating Climate Uncertainty

About a month ago I was invited to represent the Cabot Institute at the All Parliamentary Party Climate Change Group (APPCCG) meeting on “Communicating Risk and Uncertainty around Climate Change”. All Party Groups are groups of MPs and Lords with a common interest they wish to discuss, who meet regularly but fairly informally. Here are the APPCCG register, blog, Twitter and list of events.

The speakers were James Painter (University of Oxford), Chris Rapley (UCL) and Fiona Harvey (The Guardian), and the chair was (Lord) Julian Hunt (UCL). Rather than write up my meeting notes, I’ll focus on the key points.

[Disclaimer: All quotes and attributions are based on my recollections and note-taking, and may not be exact.]

1. People have a finite pool of worry

I’ll start with this useful phrase, mentioned (I think by Chris) in the discussion. Elke Weber describes this:

“As worry increases about one type of risk, concern about other risks has been shown to go down, as if people had only so much capacity for worry or a finite pool of worry. Increased concern about global warming may result in decreased concern about other risks…the recent financial crisis reduced concern about climate change and environmental degradation.” — “What shapes perceptions of climate change?” (pdf currently here)

Lessons: We cannot expect or ask people to worry about everything: concern about other issues can reduce concern about climate change, while evoking strong emotions about climate change can reduce concern about other issues. So Chris encouraged talking about opportunities, rather than threats, wherever possible.

2. People interpret uncertainty as ignorance

People often interpret the word “uncertainty” as complete ignorance, rather than, for example, partial ignorance (..!) or a well-defined range of possible outcomes. This may be due to language: “I’m not certain” is close to “I don’t know”.

Just as important is exposure to research science. Science is often presented as a book of facts, when in fact it is a messy process of reducing our uncertainty about the world. At a school this year the head teacher told us about an Ofsted inspection during which they had a fantastic science workshop, where groups of students solved challenging problems using real data. At the end of the day, the inspector said: “Fine, but wouldn’t it have been quicker to have told them the answer first?”

Lessons: Revolutionise the education system.

3. People are uncomfortable with uncertainty

Even when people do understand uncertainty, it can become a convenient rug under which to brush difficult decisions. Chris said that over-emphasising uncertainty leads to decision-making paralysis. When a decision invokes fear or anxiety (or, I would add, political disagreement), uncertainty can be used to dismiss the decision entirely.

“The Higgs boson”, Chris said, “was not a ball bearing found down the back of a sofa, but a statistical result”. It was just possible it hadn’t been discovered. But it wasn’t reported this way. The Higgs, of course, does not invoke fear, anxiety or political disagreement (though please leave comments below if you disagree).

Lessons: Decision paralysis might be reduced by talking in terms of confidence rather than uncertainty. But perhaps more importantly…

4. People do accept the existence of risk

Finite worry and the problems of talking about uncertainty need not mean deadlock, James and Chris argued, because people do understand the concept of risk.  They accept there are irreducible uncertainties when making decisions. Businesses are particularly familiar with risk, of course. James mentioned that Harvard Business School is actively viewing climate change in this way:

“It’s striking that anyone frames this question in terms of ‘belief,’ saying things like, ‘I don’t believe in climate change,’… I think it’s better seen as a classic managerial question about decision-making under uncertainty.” — Forest L. Reinhardt, Business and Environment Institute faculty co-chair, HBS

Viewed in this way, the problem is not whether to make a decision based on uncertain or incomplete information, which is nearly always the case in other spheres (Chris: “Why should climate change be a special case required to have absolute certainty?”). The problem is whether the decision made is to bet against mainstream climate science:

“It seems clear that no one can know exactly what’s going to happen–the climate is a hugely complex system, and there’s a lot going on”….[The vast majority of the world’s scientists] may be wrong. But it seems to me foolish to bet that they are certainly wrong. — Rebecca Henderson, Business and Environment Institute faculty co-chair, HBS

Chris pointed out that the Technical Summary of the latest Intergovernmental Panel on Climate Change (IPCC) assessment of climate science uses the word “uncertainty” a thousand times and the word “risk” not at all, so it is not surprising the media focus on uncertainty. And how well humans understand risk is a matter worthy of much discussion. But as James writes:

“There is… a growing body of literature suggesting that risk language may be a good, or at least a less bad, way of communicating climate change to the general public”. — “Climate Change in the Media: Reporting Risk and Uncertainty”, (Executive Summary, page viii)

Lessons: Where possible, talk in terms of risk not uncertainty; see for example the IPCC report on extreme weather and, naturally, our book Risk and Uncertainty Assessment for Natural Hazards.

5. Scientists have little training

Most of us are not well trained – perhaps hardly at all – in science communication. But we must consider how the way we present numbers affects their interpretation. In 2007, the IPCC assessed the likelihood that most of global warming since the mid-20th century was caused by greenhouse gas emissions to be greater than 90%. This year they made a similar statement, but with a likelihood of 95% or greater. Chris said that if a journalist asked, “What does it mean to increase from 90% confident to 95% confident?”, a scientist could make this clearer with “[We think] the chance climate change is natural is now half as likely as before.”
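The arithmetic behind Chris’s rephrasing is worth spelling out: what halves is not the confidence but the remaining chance.

```python
# "90% confident" leaves a 10% chance the warming is (mostly) natural;
# "95% confident" leaves 5%. The remaining chance has halved.
p_natural_before = 1 - 0.90
p_natural_now = 1 - 0.95
ratio = p_natural_now / p_natural_before
print(round(ratio, 2))  # 0.5: half as likely as before
```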

He also pointed out that we don’t have training in how to deal with the “street fight” of the climate debate. In my experience, this is one of the two main reasons why most of my colleagues do not do public engagement (the other being time commitment).

Lessons: For communicating uncertainty and risk, I recommend For dealing with the street fight, my advice is first to start with a lot of listening, not talking, to get a feel for the landscape. And to talk to climate scientists already engaging on how to avoid and deal with conflict (if, indeed, they are avoiding or dealing with conflict…).

6. Journalists have little (statistical) training

The IPCC assessment reports use a “language” of uncertainty, where phrases such as “extremely likely” are given a specific meaning (in this case, 95% or greater likelihood). But James said that only 15% of media articles about this year’s report explained the meaning of this uncertainty language.
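That calibrated language is essentially a lookup table. Here is the core of it as I understand it from the AR5 guidance note (ranges are probabilities; a couple of terms, like “about as likely as not”, are omitted for brevity):

```python
# IPCC calibrated likelihood language: phrase -> (lower, upper) probability.
likelihood_language = {
    "virtually certain": (0.99, 1.00),
    "extremely likely": (0.95, 1.00),
    "very likely": (0.90, 1.00),
    "likely": (0.66, 1.00),
    "more likely than not": (0.50, 1.00),
    "unlikely": (0.00, 0.33),
    "very unlikely": (0.00, 0.10),
    "extremely unlikely": (0.00, 0.05),
}

low, high = likelihood_language["extremely likely"]
print(f"'extremely likely' means a probability of at least {low:.0%}")
```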

And in the discussion someone quoted a journalist as saying “The IPCC report says it has 95% confidence – what do the other 5% of the scientists think?” In other words, confusing the idea of a consensus and a confidence interval. There was a laugh at this in the room. But I think this is easily done by people who do not spend all day thinking about statistics. That would be: the majority of the human race.

Lessons: Er, many journalists could benefit from more statistical training. Here is what that might look like.

7. “Newspaper editors are extremely shallow, generally”

Fiona, her tongue only slightly in cheek, gave us this memorably-made and disappointing (if predictable) point.

Just because something is important it doesn’t mean it will get into a news outlet. An editor might go to a cocktail party, talk to their glamorous celebrity friends, hear some current opinion, and then the next day their paper says…

In other words, the social diary – including meetings with high profile climate sceptics – can have a substantial influence on the viewpoint taken. (Of course, she noted, the editor of The Guardian is a profound man, not influenced by such superficiality). To counter this we would need to go to influential people and whisper in their ears too. We would need to launch a prawn cocktail offensive – or more appropriately, as one wit suggested, a goat’s cheese offensive. You heard it here first. And last.

Lessons: Go to more cocktail parties hosted by influential people.

8. There are many types of climate sceptic

There was general support for scepticism among the speakers. Chris said it was perfectly valid for the public to ask scientists “Can we see your working?”; in other words, to ask for more details, code and data. All the speakers said they don’t use the word “denier”.

James said we should not generalise, and described four types of sceptic: trend, attribution, impacts, and policy. A trend sceptic would not be convinced there is global warming; an attribution sceptic about how much is man-made; an impacts sceptic might say we don’t know enough about when and how severe the impacts will be; and a policy sceptic would take issue with how to tackle the problem. (Personally, I believe there are as many types of sceptic as there are sceptics, but that would be a longer list to write down). Fiona pointed out that one person can be all these types of sceptic, moving from one argument to another as a discussion progresses. Some thought this would be incoherent (i.e. kettle logic, contradictory arguments) but others thought it could be coherent to be sceptical for more than one of those reasons.

Lessons: Treat each sceptic as an individual (flower); don’t assume they are one type of sceptic when they may be another, or more than one.

9. Trust is important 

What determines people’s views on climate change? As James pointed out, there is evidence that what drives opinions is not science, or even the media (they determine only the topics of discussion), but political, cultural and social values. Fiona had said earlier in the meeting, “Climate change is more politicised than ever before in my lifetime: it is becoming a matter of right or left. This is very, very scary. If you allow this, you lose any hope of doing anything sensible about it.”

All this is true. But I’ll end with a slightly more optimistic quote, which I think was from Chris: “The sea change in the battle with tobacco companies was when the message got across that the adverts were not trustworthy.” I quote this not because I believe it is the same as the climate debate, and not because sceptics are untrustworthy (though some may be), but because I (some might say, choose to) interpret it to mean that trust is important. When people trust the messenger, the message is more likely believed.

Lessons: Other things are important, but sometimes communication is a matter of trust. I emphasise this point because it’s what I already believe; others may disagree (politely, please…).


I would have liked to add more references supporting the points made by the speakers, but ran out of time. Some are in James’ book mentioned above. Do please add them in the comments if you have them.

The title of this blogpost came from realising I had nine points to make and thinking of this set of shows curated by Robin Ince celebrating science, skepticism, and rationalism. If you’re in the UK this December, do go.


Corrections (9th Dec):

– Chris actually said the word ‘risk’ is used in the IPCC physical science group Technical Summary fewer than 100 times, rather than not at all.

– James said only 15% of media articles about the IPCC 2007 report explained the meaning of the uncertainty language, not this year’s.

Category: events, risk, scicomm, statistics, uncertainty | 188 Comments

No need to worry about Greenland’s waterslides

We’ve had a new study published about the slippery slopes of Greenland. If we’re right they’re not as slippery – and therefore as worrying – as we first thought.

Meltwater in Greenland

The entrance to a moulin. Photo: George Cave as part of ICE TRACERS.

Greenland’s ice sheet is not simply a giant ice cube, inert but for gradual erosion from climate change. It’s a dynamic, shifting landscape, a place of delicate balance between the forces that create ice and those that destroy it. Under its own immense weight, ice flows continuously towards the coasts and is lost whenever icebergs break off into the sea, or when the surface is warm enough to melt or sublimate. Ice is constantly replaced as falling snow compacts, or rain and meltwater freeze. These incomes and outgoings add up to an ice budget that changes with altitude (lower down the air is warmer, so there is more melting), the seasons, and long-term climate change. Future sea level depends on whether or not Greenland can balance its books or stays in the red.

But it’s not only the surface and edges of the ice sheet we need to watch: it’s the bottom too. Ice at the base is under immense pressure from the hundreds of metres of ice above it, which allows it to melt at temperatures below zero degrees Celsius. The meltwater flows out to sea, sandwiched between the ice sheet and the rock on which it sits, creating a waterslide that helps the ice flow more quickly than it would under gravity alone.
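How far below zero can ice at the base melt? A back-of-envelope estimate using textbook values (these constants are standard approximations, not numbers from our study):

```python
# Rough pressure-melting-point estimate beneath an ice sheet: the melting
# point of ice drops with pressure (Clausius-Clapeyron).
RHO_ICE = 917.0    # kg m^-3, density of ice
G = 9.81           # m s^-2
CC_SLOPE = 9.8e-8  # K per Pa, air-saturated ice (approximate)

def pressure_melting_point(ice_thickness_m):
    """Melting point (deg C) at the base of ice this thick."""
    pressure = RHO_ICE * G * ice_thickness_m  # overburden pressure, Pa
    return -CC_SLOPE * pressure

for thickness in (1000.0, 2000.0, 3000.0):
    t = pressure_melting_point(thickness)
    print(f"{thickness:.0f} m of ice: melts at {t:+.2f} C")
```

So under a couple of kilometres of ice, the base can melt a degree or two below zero Celsius.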

And meltwater on the surface doesn’t behave as we might expect. We call the meltwater that hasn’t refrozen ‘runoff’, but it doesn’t simply run downhill like water off a duck’s back. Most of it carves paths straight down through the ice, helped by crevasses on the surface. In the height of summer these ‘moulins’ become dangerous chutes of fast flowing water. The water reaches the bottom and adds to the waterslide effect. It’s been known since the 1970s that ice in mountain glaciers and parts of Greenland speeds up during the summer melt season.

Dr Liz Bagshaw throwing electronic tracers down a moulin. Photo: George Cave for ICE TRACERS.


Ten years ago, Jay Zwally and others put forward an idea that had profound implications for sea level. They suggested acceleration of the waterslide – ‘enhanced basal lubrication’, a.k.a. the ‘Zwally effect’ – could be an alarming mechanism for rapid sea level rise. As the climate warmed, the melting would increase, and the ice would speed up towards the coast. Ice would then be lost more quickly, not only by creating more icebergs, but also by melting more rapidly at these lower altitudes. The extra meltwater would feed the cycle, resulting in faster and faster ice loss.

We’re the first to test this idea using models of the Greenland ice sheet, comparing the effect on sea level with the usual baseline projections that don’t include the Zwally effect. What we found surprised us – it made very little difference. Even in our worst case scenario, enhanced basal lubrication by surface meltwater added only 8mm to sea level over two hundred years, less than five percent of the baseline sea level rise. In some of our tests, it even reduced it.

Our study has three parts: finding out how ice speed is affected by surface meltwater using observations, predicting climate change and its effect on surface melting using climate models, and predicting how these would affect ice flow and sea level using ice sheet models.

A speed limit for ice

Does ice speed up indefinitely as melting increases, or is there a natural speed limit? We’d like to measure this relationship in many different places and over many years, but unfortunately we only have ice speed observations for a single place (11 sites, along a line perpendicular to the coast in southwestern Greenland) and a few years (4 sites 2006-2010; 7 sites 2009-2010):

Location of 11 field sites (Shannon et al., 2013, PNAS).


And not all of these have observations of runoff. So because we don’t have complete data, and because we want to make predictions with a climate model, we actually quantify how ice speed in the real world relates to runoff in the climate model. GPS receivers are left in the ice all year to measure ice speed. We convert this to “speed-up”: the ratio of the average speed throughout the year to the lower, winter speed. The faster ice moves in the summer relative to winter, the larger the speed-up. The meltwater runoff is simulated with the “MAR” regional climate model (covering only the area over Greenland, so that we could use higher resolution). Here is measured speed-up (vertical axis) plotted against simulated runoff (horizontal axis):

Measurements of ice speed-up versus climate model meltwater runoff (Shannon et al., 2013, PNAS)


On the left-hand side there is a fairly straightforward relationship: more runoff means more speed-up. But past a certain point things are not so clear. Does the speed-up plateau, or even decrease? In either case, does this actually mean the Zwally effect is self-limiting? Clearly we don’t have enough data to make definitive statements.

So we make the loosest, most flexible statements we can. The graph below – sorry for the typos and watermark – shows the huge range of possibilities we consider. In the worst case scenario (“Max”), speed-up keeps increasing with runoff. But in the best case scenario (“Min”), it declines.

As above, but showing our best estimate of the relationship (solid line), our minimum and maximum estimate of speed-up for a runoff of 5 metres per year (dashed lines), and all the estimates in between (grey lines).


Zwally was concerned about something like the worst case scenario, where more meltwater always means faster ice. But how realistic is the best case scenario? Could the lubrication effect actually decrease with more meltwater?

Recently both observations and theory have indeed supported this kind of decline. It involves a massive change in plumbing. When there is not much runoff, the water forces its way to the coast through an inefficient web of pockets (‘cavities’) linked by narrow channels. This creates high water pressure, lifting the ice from the rock below it and making it slide. When runoff increases past a critical threshold, the extra water carves wide pipes (‘channels’) through the ice which are much more efficient at flushing the water out to sea. The water pressure drops: the waterslide slows down.

The important point about our study is that we do not assume either is more likely. We simply predict both their effects on sea level.
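To make the two end-member assumptions concrete, here is a hypothetical piecewise sketch (the slopes, threshold and decline rate are invented for illustration, not the study’s fitted values): in the “Max” case speed-up keeps rising with runoff, while in the “Min” case it rises until a channelisation threshold and then declines as efficient drainage lowers the water pressure.

```python
# Two end-member assumptions for speed-up as a function of runoff
# (illustrative parameters only, not the study's actual functions).

def speedup_max(runoff, slope=0.1):
    """Worst case: speed-up keeps rising linearly with runoff."""
    return 1.0 + slope * runoff

def speedup_min(runoff, threshold=2.0, slope=0.1, decline=0.05):
    """Best case: rises until a channelisation threshold, then
    declines as efficient channels flush water out to sea."""
    if runoff <= threshold:
        return 1.0 + slope * runoff
    return 1.0 + slope * threshold - decline * (runoff - threshold)

for r in (1.0, 5.0):  # runoff in metres per year
    print(r, speedup_max(r), speedup_min(r))
```

Below the threshold the two cases agree; above it they diverge, which is exactly the region where the observations run out.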

Runaway runoff?

We predict changes in climate and meltwater runoff for the SRES A1B emissions scenario using the same regional climate model as above (MAR), which takes inputs from a global climate model (ECHAM5). The prediction is about a tripling of the amount and area of meltwater runoff between 2000 and 2100. If the Zwally effect were important, we might expect such a large increase in runoff to lead to a large speed-up and therefore a large sea level rise.

Future ice flow and sea level

We use the climate change and runoff, and our relationship between runoff and speed-up, as inputs to four different models of the Greenland ice sheet, to predict changes in ice flow and sea level. (We want to make projections over two hundred years, so we assume runoff is constant from the year 2100 to 2200.) Across the board, the effects are very small.

The baseline sea level rise between 2000 and 2200 ranges from 163mm to 173mm for the four ice sheet models. Enhanced basal lubrication adds just 4-8mm to sea level in the worst case scenario, and between a 1mm rise and a 1mm lowering (relative to the baseline) in the best case scenario. We test this with another global climate model, and another emissions scenario, but the largest effect is always less than five percent. 
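As a quick sanity check of the headline figure, comparing the worst-case addition against the smallest baseline:

```python
# Arithmetic check of the numbers quoted above: the largest
# lubrication effect (8 mm) against the lowest baseline (163 mm).
baseline_mm = 163
worst_case_extra_mm = 8
fraction = worst_case_extra_mm / baseline_mm
print(round(100 * fraction, 1))  # -> 4.9 (percent)
```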

Why so small? No matter whether we use the best or worst case scenario, enhanced basal lubrication mainly changes the ice speed and distribution, rather than the amount lost to sea.

The ice speeds up inland, which makes it thinner. This could make it more vulnerable to melting, because its surface would sit at a lower, warmer altitude. But we found the increase in melting was small, and in any case it was compensated by the ice thickening at the edge from the faster flow.

In the best case scenario the ice is slower at the edge relative to the baseline, because of the decline in speed-up at higher runoff. This can mean that less ice is lost at the coast, so the sea level contribution is smaller: but the thicker ice at the edge partly, or totally, compensates, depending on the ice sheet model. This is why the best case scenario can either lower or raise sea level relative to the baseline.

In the worst case scenario, the ice is faster everywhere. But the effect on sea level is still small relative to the baseline sea level rise.

This is the first attempt to study whether meltwater lubrication of Greenland’s waterslides is important to future sea level. We find it is not. Instead we should focus on understanding changes in melting and snowfall and (to a lesser extent) icebergs.

Update (13th Aug): Ruth Mottram points out it might be better named the Iken effect after the first person to discover the link between ice speed and the water pressure underneath, Almut Iken.

Category: ice2sea, icesheetmodels, news, parameterisation, plainenglish, sealevel, statistics | 11 Comments

Climate scientists must not advocate particular policies

This is an invited contribution to the Guardian Political Science blog.


As a climate scientist, I’m under pressure to be a political advocate.

This comes mainly from environmentalists. Dan Cass, wind-farm director and solar advocate, preferred me not to waste my time debating “denialist morons” but to use political advocacy to “prevent climate catastrophe”. Jeremy Grantham, environmental philanthropist, urged climate scientists to sound a “more desperate note…Be arrested if necessary”. A concerned member of the public judged my efforts at public engagement successful only if they showed “evidence of persuasion”.

Others ask “what should we do?” At my Cheltenham Science Festival event Can we trust climate models? one of the audience asked what we thought of carbon taxes. I refused to answer, despite the chair’s repeated requests and his joke (patronising, though his aim was to entertain) that I “shouldn’t be embarrassed at my lack of knowledge”.

Even some of my colleagues think I should be clearer about my political beliefs. In a Twitter debate last month Gavin Schmidt, climate scientist and blogger, argued we should state our preferences to avoid accusations of hidden agenda.

I believe advocacy by climate scientists has damaged trust in the science. We risk our credibility, our reputation for objectivity, if we are not absolutely neutral. At the very least, it leaves us open to criticism. I find much climate scepticism is driven by a belief that environmental activism has influenced how scientists gather and interpret evidence. So I’ve found my hardline approach successful in taking the politics and therefore – pun intended – the heat out of climate science discussions. They call me an “honest broker”, asking for “more Dr. Edwards and fewer zealous advocates”. Crucially, they say this even though my scientific views are absolutely mainstream.

But it’s not just about improving trust. In this highly politicised arena, climate scientists have a moral obligation to strive for impartiality. We have a platform we must not abuse. For a start, we rarely have the necessary expertise. I absolutely disagree with Gavin that we “likely know far more about the issues involved in making policy choices than [our] audience”.

Even scientists that are experts – such as those studying the interactions between climate, economy, and politics, with “integrated assessment models” – cannot speak for us because political decisions necessarily depend on values. There are many ways to try to minimise climate change (with mitigation or geoengineering) or its impacts (adaptation) and, given a pot of money, we must decide what we most want to protect. How do we weigh up economic growth against ecosystem change? Should we prioritise the lives and lifestyles of people today or in the future? Try to limit changes in temperature or rainfall? These questions cannot be answered with scientific evidence alone. To me, then, it is simple: scientists misuse their authority if they publicise their preferred policy options.


Policy decisions on climate change: not black and white.

Some say it is safe to express our views with sufficient context: “this is just my personal opinion, but…”. In my experience such caveats are ignored. Why else would we be asked “what should we do?” by the public or media, if not with an expectation of expertise, or the desire for data to replace a difficult decision? Rather than being incoherent – “I don’t know much about policy, but I know what I like” – or dictatorial – “If I were to rule the world, I would do this” – we should have the courage and humility not to answer.

Others say it is simplistic and impossible to separate science from policy, or that all individuals are advocates. But there is a difference between giving an estimate of the consequences of a particular action and giving an opinion on how or whether to take that action; between risk assessment, estimating the probability of change and its effect on things we care about, and risk management, deciding how to reduce or live with that risk. A flood forecaster provides a map of the probability of flooding, but she does not decide what is an unacceptable level of risk, or how to spend the budget to reduce the risk (sea defences; regulation of building and insurance).

We must be vigilant against what Roger Pielke Jr. in The Honest Broker calls “stealth issue advocacy”: claiming we are talking about science when really we are advocating policy. This is clearly expressed by Robert T. Lackey:

“Often I hear or read in scientific discourse words such as degradation, improvement, good, and poor. Such value-laden words should not be used to convey scientific information because they imply a preferred…state [or] class of policy options…The appropriate science words are, for example, change, increase, or decrease.” (Science, Scientists and Policy Advocacy)

I became a climate scientist because I’ve always cared about the environment, since a vivid school talk about the ozone layer (here, page 4) and the influence of my brother, who was green long before it was cool to be green. But I care more about restoring trust in science than about calling people to action; more about improving public understanding of science so society can make better-informed decisions, than about making people’s decisions for them. Science doesn’t tell us the answer to our problems. Neither should scientists.

Category: policy | 116 Comments

Debrief from Cheltenham

I’m at home, groggy but happy after our Cheltenham Science Festival event yesterday, “Can we trust climate models?”. It was exhausting, fun, exciting, and long…

I’ll write about the content of the event in future posts, but I wanted to jot down my impressions about its atmosphere, which I think was unique for an event of this kind.

I’d been nervous in the morning and early afternoon. My relaxed day of preparation was steadily eroded by things going slightly wrong, such as realising, after my laptop battery ran out, that I’d left my charger on the other side of Bristol after Bright Club. I had a long list of papers I wanted to skim, notes I wanted to make, an introductory talk to write. Taking the train with my friend and colleague Jonty, I was too stressed to speak and had a sense of humour failure when he gently teased me about leaving my adapter somewhere (again, he said, though I can’t think which other time *cough* times he was thinking of…).

Arriving at the Science Festival site I felt more at ease. It’s like home. In the previous two years I’d come as a punter, had huge fun, loved the events and met extremely wonderful people. This was my first year as an event organiser, and as the time approached I became less nervous, in part because I couldn’t do any more preparation. It helped to take some time out for a gentle, fun radio interview by (the extremely wonderful) Timandra Harkness.

The event itself was an hour long, with introductory talks from the panel – myself, “climate agnostic” Jonathan Jones, professor of physics at the University of Oxford, and Claire Craig, science advisor in the UK government – followed by a few questions from chair Mark Lythgoe and many from the audience. The venue holds 200-300, depending on seating arrangements, and was almost sold out. After the main event we continued at the “Talking Point”, a small tent with informal seating, with around 50-70 of the audience. We took questions for, I think, another hour.

I spent more time battling with the chair than the other panel members! Mark repeatedly accused me of waffling and not answering the question. I told him I didn’t like his questions when they were ill-defined or about policy (I don’t make public statements about preferred policy options). Some of the audience questions were also a little heated, on both sides: those worrying about climate change, and those worrying about climate scientists.

Listening to @flimsin in talking point tent, sending a big virtual hug to her as she keeps getting interrupted mid explanation. #stressful –Amanda Woodman-Hardy (Cabot Institute)

But despite this, the mood of the event was absolutely wonderful throughout. Almost all the “battles” were respectfully teasing, filled with humour. We laughed a lot, which must be a first for a discussion about climate change, scepticism and policy! I put this down to the warm and respectful relationships between the panel members (even though Claire and Jonathan had only just met), to our joyfully provocative chair, and to the audience who quickly created the serious and light atmosphere we hoped for. It was a privilege to have such an interested and supportive audience, such thoughtful, interesting and honest co-members of the panel, and a fun chair who dug into us to make us react and think more deeply about our answers.

At the end of the event, Mark stepped back from his aim of provoking us for theatre and entertainment. He thanked us very sweetly for trying so hard to answer so many questions, and said we were the bravest panel he’d ever seen. And he sent a wonderful tweet afterwards:

Best panel on climate models yet thanks to star cast fab @flimsin articulate @nmrqip wise Claire Craig #cheltscifest brave and entertaining — Mark Lythgoe (chair)

It was an enormous pleasure to put on this event, and I learned a lot. Thank you to the Cabot Institute, Bristol Environmental Risk Research Centre (BRISK), ice2sea and my department (School of Geographical Sciences at the University of Bristol) for sponsoring it, thank you to the Centre for Public Engagement for supporting me (including giving me an award! hurrah!), thank you to my panel members and chair for saying yes and working so hard, and thank you to all that came.

Category: climatemodels, events, ice2sea, scicomm | 12 Comments